paperhash stringlengths 40 40 | s2_corpus_id stringlengths 3 9 | arxiv_id stringclasses 0 values | title stringlengths 7 324 | abstract stringlengths 0 7.23k ⌀ | authors sequence | summary stringclasses 0 values | field_of_study sequencelengths | venue stringlengths 15 253 ⌀ | publication_date stringdate 1952-06-01 00:00:00 2019-07-01 00:00:00 | n_references int32 0 4.92k ⌀ | n_citations int32 0 84.2k ⌀ | n_influential_citations int32 | introduction stringlengths 15 173k ⌀ | background stringlengths 2 115k ⌀ | methodology stringlengths 40 140k ⌀ | experiments_results stringlengths 1 142k ⌀ | conclusion stringlengths 7 38k ⌀ | full_text stringlengths 29 195k | decision bool 0 classes | decision_text stringclasses 0 values | reviews sequence | comments sequence | references sequence | hypothesis stringlengths 105 1.27k ⌀ | month_since_publication int32 67 872 | avg_citations_per_month float32 0 1.24k ⌀ | mean_score float32 | mean_confidence float32 | mean_novelty float32 | mean_correctness float32 | mean_clarity float32 | mean_impact float32 | mean_reproducibility float32 | openreview_submission_id stringclasses 0 values |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dc44994ab438f6a8dac1c732fd3bc83465e11a7f | 45511699 | null | {\O}stasiatisk Instituts {EDB}-arbejde (The {E}ast {A}sian Institute{'}s work with {EDB}) [In {E}nglish] | Japansk poesi (waka): Test of small sample (600 poems). Output to paper tape and Katakana type-head. Japanese can now be typed directly on the data screen, so enquiries are being made to have output in Japan. Researcher: Yoichi Nagashima. Kinesisk maleri, 1949-1979. The changes in China are seen in the paintings reproduced in periodicals, albums, etc. 4000 titles (will go to 7000 eventually), adaptable to MARC fields. Researcher: Simon Heilesen. Binomer i kinesisk: The "word" in Chinese is usually a binome, i.e., two Chinese characters. | {
"name": [
"Grinstead, Eric"
],
"affiliation": [
null
]
} | null | null | Proceedings of the 2nd Nordic Conference of Computational Linguistics ({NODALIDA} 1979) | 1979-10-01 | 0 | 0 | null | The first two numbers can be placed in a single 3^-bit word and the location used for additional information gained from statistical work. I would like to add a figure for rank even before the whole dictionary is coded. As the dictionary I am using is a Chinese-Japanese dictionary, the results are of interest to the Japanese, and some cooperation is envisaged. One-third of the dictionary (12 vols). Researcher: Eric Grinstead. Østasiatisk Instituts EDB-arbejde Eric Grinstead Proceedings of NODALIDA 1979, pages 65-65 | null | null | null | null | Main paper:
99:
The first two numbers can be placed in a single 3^-bit word and the location used for additional information gained from statistical work. I would like to add a figure for rank even before the whole dictionary is coded. As the dictionary I am using is a Chinese-Japanese dictionary, the results are of interest to the Japanese, and some cooperation is envisaged. One-third of the dictionary (12 vols). Researcher: Eric Grinstead. Østasiatisk Instituts EDB-arbejde Eric Grinstead Proceedings of NODALIDA 1979, pages 65-65
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 544 | 0 | null | null | null | null | null | null | null | null |
4ded3c008171737491d8f5ca46abfe91ba9eaad1 | 30313517 | null | Automatisk orddeling (Automatic word-splitting) [In {D}anish] | Indledning. Når man skriver på et stykke papir, er man tvunget til at tage en beslutning, når man naermer sig papirets højre kant: Skal hele det følgende ord flyttes ned på naeste linie, eller skal det deles? Vilkårligheden af dette valg understreger, at en orddeling er et nødvendigt onde og ikke i sig selv rummer nogen information i modsaetning til fx en indrykket linie, der markerer starten af et nyt afsnit. Det er derfor et rimeligt krav, at informationsindholdet i det delte ord bevares, således at orddelinger, der medfører midlertidig forvirring eller permanente misforståelser, undgås. På trods af dette har den stadigt stigende anvendelse af automatisk tekstbehandlingsudstyr medført, at megen trykt tekst indeholder et stort antal forkerte orddelinger. Udviklingen er isaer tydelig inden for avisproduktion, hvor korrekturlaesning ofte foregår før ombrydning, og dermed før de automatiske orddelinger er udført. Nedenstående typiske eksempler på forkerte orddelinger er således fundet i de danske aviser Politiken og Ekstra Bladet. | {
"name": [
"Hansson, Hasse"
],
"affiliation": [
null
]
} | null | null | Proceedings of the 2nd Nordic Conference of Computational Linguistics ({NODALIDA} 1979) | 1979-10-01 | 0 | 0 | null | Det ses, at linie 4 og 7 i øverste spalte indeholder uacceptabelt store mellemrum, og at den nederste spalte er 1 linie kortere end den øverste. Konklusionen må derfor vaere, at orddeling er nødvendig, dels for at få rimeligt udfyldte linier, dels for at opnå en papirbesparelse. I perioden februar-september 1979 har jeg derfor i samarbejde med lektor Bente Maegaard fra Institut for anvendt og matematisk lingvistik, Københavns Universitet, udviklet en algoritme til automatisk deling af danske ord. Denne artikel beskriver dels de metoder, som vi har udviklet til brug for algoritmefremstillingen, dels den faerdige algoritme. Det forudsaettes, at laeseren har et godt kendskab til det danske sprog; endvidere vil kendskab til programmeringssproget Pascal eller et andet ALGOL-lignende sprog vaere ønskeligt.
Regler for dansk orddeling. Målet for algoritmeudviklingen er at fremstille en algoritme, der i så mange tilfaelde som muligt deler ordene i overensstemmelse med de regler for dansk orddeling, der er angivet i Retskrivningsordbogen [1]. Formålet med dette afsnit er at analysere reglerne for at klargøre i hvor stort omfang, de lader sig implementere på en datamat. i) "Sammensatte ord deles efter deres bestanddele, når disse er let kendelige." På dansk kan sammensatte ord frit dannes ved at skrive de to ord, der danner sammensaetningen, uden et adskillende mellemrum, hvorimod man fx på engelsk i stor udstraekning beholder mellemrummet eller anvender en bindestreg. På trods af reglens klarhed er der derfor store vanskeligheder forbundet med maskinelt at afgøre, om et ord er sammensat. ii) "Afledninger deles ligesom sammensaetninger efter deres bestanddele, når disse er let kendelige." Reglen er en parallel til i); men den er mindre konsekvent, idet det er usikkert, hvornår en afledning er "let kendelig". Antallet af afledningselementer, affikser, er dog ret begraenset, hvorfor denne regel kan implementeres, hvis det er muligt at opstille fuldstaendige lister over de affikser, der accepteres som let kendelige. iii) "For usammensatte ord gaelder følgende regler:" 1) "En medlyd mellem to selvlyd skrives sammen med den sidste selvlyd." Denne regel kan umiddelbart implementeres; men der er dog, som bemaerket af Spang Hanssen [2], en tradition for at skrive konsonanten x sammen med den første vokal. 2) "Af to medlyd mellem selvlyd går en til hver linie; ..." Implementation af denne simple regel vanskeliggøres af følgende undtagelser (isaer b) og c)): a) "sk, sp, st kan gå sammen til ny linie." Der er tradition for at udnytte denne mulighed, der uden vanskelighed kan implementeres. b) "I ord, der ikke har tryk på første stavelse, går begge medlyd til den trykstaerke
Regel ii) kraever, at afledninger deles efter deres bestanddele, når disse er let kendelige. Det er ikke muligt at give en praecis definition af, hvornår en afledning er "let kendelig"; men det er dog muligt at opstille en liste med et ret begraenset antal elementer, der accepteres som "let kendelige" afledninger af vide kredse. Det er derfor muligt ved hjaelp af affikslister at implementere denne regel således, at der opstår meget få fejl. ... 5.000 ord. Ialt 358.000 ord.
Resultatet af analysen er en 20 x 20 matriks, der for hver konsonantkombination angiver, om der skal deles til venstre, mellem eller til højre for de to konsonanter. Endelig indeholder matricen oplysninger om de kombinationer, der er usikre; anvendelsen af denne oplysning vil blive forklaret senere. Ved kombinationer af tre og flere konsonanter er mulighederne så talrige, at det ikke er hensigtsmaessigt på tilsvarende måde at opstille en matriks i tre eller flere dimensioner. Dette skyldes dels sådanne matricers mange elementer, dels at de fleste konsonantkombinationer er utaenkelige i praksis og derfor uinteressante. I stedet har vi ud fra analysen af materialet opstillet en endelig tilstandsautomat, hvor man - populaert sagt - kun registrerer de kombinationer, som man er interesseret i. For at begraense automatens størrelse indeholder den ikke de konsonantkombinationer, som man kan dele korrekt ved at anvende den tidligere omtalte tokonsonantmatriks på de to konsonanter umiddelbart foran den anden vokal. Et eksempel er ordet overblik, hvor matricen giver delingen -bl og dermed over-blik. Der er dog stadig mange kombinationer, som må klares ved en tre- eller endda firkonsonantanalyse. Mange kombinationer forekommer kun i sammensaetninger, hvorfor mange sammensatte ord bliver delt korrekt netop på grund af automaten. Nedenfor vises den lille del af automaten, der finder en orddeling i alle konsonantkombinationer, der indledes med konsonanterne ls.
2. RECKU-Nyt er en brugerorientering, der udsendes ca. 10 gange årligt af RECKU, Det Regionale Edb-center ved Københavns Universitet. 3. Forskellige bøjningsformer af samme ordstamme regnes i denne forbindelse for forskellige ord.
Dansk orddeling følger ligesom fx fransk et fonetisk princip, hvorfor den konsonantkombination, der forekommer efter delestregen, skal kunne forekomme i begyndelsen af et ord. Når man derfor ved en af de ovenfor naevnte metoder har fundet en orddeling, undersøges om kombinationen er lovlig. Er dette ikke tilfaeldet, flyttes bindestregen mod højre, indtil kravet er opfyldt. Denne fonetiske kontrol forhindrer isaer de fejl, der opstår ved forsøg på genkendelse af et affiks, hvor den genkendte tegnfølge i det givne ord viser sig at indgå i en ganske anden sammenhaeng. Et eksempel er ordet angsten, hvor praefikset an genkendes og ordet derfor deles an-gsten. Den fonetiske kontrol sikrer, at gs ikke forekommer efter delestregen, og delingen rettes derfor til ang-sten. Vi vil i det følgende sammenfatte de hidtil omtalte metoder under betegnelsen de algoritmiske metoder. Selvom disse metoder forfines, vil der stadig findes ord, der ikke deles korrekt. For alligevel at kunne dele sådanne ord rigtigt opbygges en undtagelsesordbog, der indeholder alle de ord, som erfaringsmaessigt giver anledning til fejl. På grund af det store antal opslag i en sådan ordbog er det vigtigt, at den organiseres således, at søgning er effektiv; specielt skal søgning efter ord, der ikke findes i ordbogen, vaere effektiv. Dette opnås ved at benytte en såkaldt hashteknik. Algoritmen har i linie 3-5 arbejdet på hele ordet; men i linie 6-22 findes en løkke, som anvendes til at finde delepunkter mellem de enkelte stavelser. Betingelsen for at der findes et delepunkt mellem to stavelser i denne løkke er, at der ikke allerede i linie 3-5 er registreret et delepunkt mellem de pågaeldende stavelser.
I linie 8 gennemløbes to affikslister, hvor den ene indeholder de affikser, som der skal deles efter, medens den anden består af de affikser, som der skal deles før. Hvis et sådant affiks ikke genkendes, antages stavelsen at vaere en del af et usammensat ord, og der foregår derfor en opsplitning efter antallet af konsonanter ved hjaelp af CASE-saetningen i linie 11. Det tilfaelde, hvor der ikke findes nogen konsonanter mellem de to vokaler, der afgraenser stavelsen, behandles i linie 12. Et sådant vokalsammenstød kan altid deles i oprindeligt danske ord; men i mange fremmedord betegner de to vokaler en diftong, som ikke kan deles. For eksempel kan hverken ea eller au deles i ordet niveau. Der er derfor opbygget en lille tabel, der angiver mellem hvilke vokaler, en deling er tilladelig. Tilfaeldet 1 konsonant klares umiddelbart i linie 13, medens tilfaeldet 2 konsonanter løses ved opslag i tokonsonantmatricen. Ved tre eller flere konsonanter anvendes tilstandsautomaten; indeholder den ingen løsning, anvendes tokonsonantmatricen på de to konsonanter umiddelbart før den anden vokal. Da automatisk orddeling er en kilde til fejl, bør orddelingsalgoritmen ikke aktiveres, hvis man uden orddeling kan opnå en linie af typografisk acceptabelt udseende. Man kan ved hjaelp af følgende fire parametre styre dels antallet af delinger, dels udseendet af disse: 1) Aktiv eller passiv 2) Relativ spildfaktor 3) Sikkerhedsniveau 4) Minimum antal tegn før og efter delestreg. (5. RECKU er en forkortelse for Det Regionale Edb-center ved Københavns Universitet.) Første parameter giver mulighed for at hindre, at orddelingsalgoritmen aktiveres. Dette har isaer betydning, hvis kildesproget ikke er dansk, da algoritmen er uanvendelig til andre sprog end dansk. Anden parameter, den relative spildfaktor, angiver den procentdel af linien, der skal vaere ubrugt før orddelingsalgoritmen aktiveres. Angives en stor spildfaktor, fås få orddelinger men til gengaeld også en "løs" og dermed uøkonomisk sats; omvendt vil man ved en lille spildfaktor få en "taet" sats med mange delinger. Den bedste spildfaktor afhaenger af personlig smag; men en vaerdi mellem 5 og 10 procent giver saedvanligvis gode resultater. Det er vaesentligt, at spildfaktoren er relativ: En lang linie giver bedre mulighed for at fordele en vis uudnyttet plads end en kort linie. Ved omtalen af tokonsonantmatricen blev det naevnt, at visse kombinationer bliver registreret som usikre; men også nogle få konsonanter i enkonsonantforbindelser samt enkelte kombinationer i tilstandsautomaten er registreret som usikre. Ved at angive et sikkerhedsniveau for orddelinger kan man undgå, at ord deles i usikre delepunkter, og man kan ligeledes vaelge kun at dele ord i sikre delepunkter, hvortil regnes forekomster af bindestreg, komma eller skråstreg i ordet. Sidste mulighed har isaer betydning for engelsk, hvor man på denne måde har mulighed for at dele alle de sammensaetninger, der er dannet ved hjaelp af en bindestreg. Sidste parameter angiver det minimale antal bogstaver, der skal findes såvel før som efter delepunktet. Der er almindelig enighed om, at to bogstaver må vaere minimum; men et minimum på tre tegn foretraekkes undertiden. PHOTODOC praesenterer alle orddelinger for brugeren, idet der indsaettes et blanktegn på det sted i ordet, hvor datamaten mener, at ordet skal deles. Stjernen indsaettes for at markere den del af ordet, der maksimalt kan stå før delestregen.
Det første delte ord i eksemplet, tekstbehandling, vil datamaten således dele tekstbe-handling, og delestregen kan ikke indsaettes senere end efter h. Accepteres orddelingen, trykker brugeren blot på vognreturknappen, i modsat fald skrives ordet med et blanktegn indsat på det sted, hvor ordet ønskes delt. Et eksempel er vist ved ordet trykfaerdig, som brugeren ønsker delt tryk-faerdig. Af effektivitetsgrunde kan den del af ordet, der skrives efter blanktegnet, dog udelades. Endelig vises eksempler på de to fejl, som brugeren kan begå. I det første tilfaelde forsøger man at indsaette en delestreg til højre for maksimumspunktet, og i det andet tilfaelde staves ordet forkert ved rettelsen. I begge tilfaelde udskrives en fejlmeddelelse, hvorefter brugeren har mulighed for at rette fejlen. | null | null | null | null | Main paper:
:
Det ses, at linie 4 og 7 i øverste spalte indeholder uacceptabelt store mellemrum, og at den nederste spalte er 1 linie kortere end den øverste. Konklusionen må derfor vaere, at orddeling er nødvendig, dels for at få rimeligt udfyldte linier, dels for at opnå en papirbesparelse. I perioden februar-september 1979 har jeg derfor i samarbejde med lektor Bente Maegaard fra Institut for anvendt og matematisk lingvistik, Københavns Universitet, udviklet en algoritme til automatisk deling af danske ord. Denne artikel beskriver dels de metoder, som vi har udviklet til brug for algoritmefremstillingen, dels den faerdige algoritme. Det forudsaettes, at laeseren har et godt kendskab til det danske sprog; endvidere vil kendskab til programmeringssproget Pascal eller et andet ALGOL-lignende sprog vaere ønskeligt.
Regler for dansk orddeling. Målet for algoritmeudviklingen er at fremstille en algoritme, der i så mange tilfaelde som muligt deler ordene i overensstemmelse med de regler for dansk orddeling, der er angivet i Retskrivningsordbogen [1]. Formålet med dette afsnit er at analysere reglerne for at klargøre i hvor stort omfang, de lader sig implementere på en datamat. i) "Sammensatte ord deles efter deres bestanddele, når disse er let kendelige." På dansk kan sammensatte ord frit dannes ved at skrive de to ord, der danner sammensaetningen, uden et adskillende mellemrum, hvorimod man fx på engelsk i stor udstraekning beholder mellemrummet eller anvender en bindestreg. På trods af reglens klarhed er der derfor store vanskeligheder forbundet med maskinelt at afgøre, om et ord er sammensat. ii) "Afledninger deles ligesom sammensaetninger efter deres bestanddele, når disse er let kendelige." Reglen er en parallel til i); men den er mindre konsekvent, idet det er usikkert, hvornår en afledning er "let kendelig". Antallet af afledningselementer, affikser, er dog ret begraenset, hvorfor denne regel kan implementeres, hvis det er muligt at opstille fuldstaendige lister over de affikser, der accepteres som let kendelige. iii) "For usammensatte ord gaelder følgende regler:" 1) "En medlyd mellem to selvlyd skrives sammen med den sidste selvlyd." Denne regel kan umiddelbart implementeres; men der er dog, som bemaerket af Spang Hanssen [2], en tradition for at skrive konsonanten x sammen med den første vokal. 2) "Af to medlyd mellem selvlyd går en til hver linie; ..." Implementation af denne simple regel vanskeliggøres af følgende undtagelser (isaer b) og c)): a) "sk, sp, st kan gå sammen til ny linie." Der er tradition for at udnytte denne mulighed, der uden vanskelighed kan implementeres. b) "I ord, der ikke har tryk på første stavelse, går begge medlyd til den trykstaerke
Regel ii) kraever, at afledninger deles efter deres bestanddele, når disse er let kendelige. Det er ikke muligt at give en praecis definition af, hvornår en afledning er "let kendelig"; men det er dog muligt at opstille en liste med et ret begraenset antal elementer, der accepteres som "let kendelige" afledninger af vide kredse. Det er derfor muligt ved hjaelp af affikslister at implementere denne regel således, at der opstår meget få fejl. ... 5.000 ord. Ialt 358.000 ord.
Resultatet af analysen er en 20 x 20 matriks, der for hver konsonantkombination angiver, om der skal deles til venstre, mellem eller til højre for de to konsonanter. Endelig indeholder matricen oplysninger om de kombinationer, der er usikre; anvendelsen af denne oplysning vil blive forklaret senere. Ved kombinationer af tre og flere konsonanter er mulighederne så talrige, at det ikke er hensigtsmaessigt på tilsvarende måde at opstille en matriks i tre eller flere dimensioner. Dette skyldes dels sådanne matricers mange elementer, dels at de fleste konsonantkombinationer er utaenkelige i praksis og derfor uinteressante. I stedet har vi ud fra analysen af materialet opstillet en endelig tilstandsautomat, hvor man - populaert sagt - kun registrerer de kombinationer, som man er interesseret i. For at begraense automatens størrelse indeholder den ikke de konsonantkombinationer, som man kan dele korrekt ved at anvende den tidligere omtalte tokonsonantmatriks på de to konsonanter umiddelbart foran den anden vokal. Et eksempel er ordet overblik, hvor matricen giver delingen -bl og dermed over-blik. Der er dog stadig mange kombinationer, som må klares ved en tre- eller endda firkonsonantanalyse. Mange kombinationer forekommer kun i sammensaetninger, hvorfor mange sammensatte ord bliver delt korrekt netop på grund af automaten. Nedenfor vises den lille del af automaten, der finder en orddeling i alle konsonantkombinationer, der indledes med konsonanterne ls.
2. RECKU-Nyt er en brugerorientering, der udsendes ca. 10 gange årligt af RECKU, Det Regionale Edb-center ved Københavns Universitet. 3. Forskellige bøjningsformer af samme ordstamme regnes i denne forbindelse for forskellige ord.
Dansk orddeling følger ligesom fx fransk et fonetisk princip, hvorfor den konsonantkombination, der forekommer efter delestregen, skal kunne forekomme i begyndelsen af et ord. Når man derfor ved en af de ovenfor naevnte metoder har fundet en orddeling, undersøges om kombinationen er lovlig. Er dette ikke tilfaeldet, flyttes bindestregen mod højre, indtil kravet er opfyldt. Denne fonetiske kontrol forhindrer isaer de fejl, der opstår ved forsøg på genkendelse af et affiks, hvor den genkendte tegnfølge i det givne ord viser sig at indgå i en ganske anden sammenhaeng. Et eksempel er ordet angsten, hvor praefikset an genkendes og ordet derfor deles an-gsten. Den fonetiske kontrol sikrer, at gs ikke forekommer efter delestregen, og delingen rettes derfor til ang-sten. Vi vil i det følgende sammenfatte de hidtil omtalte metoder under betegnelsen de algoritmiske metoder. Selvom disse metoder forfines, vil der stadig findes ord, der ikke deles korrekt. For alligevel at kunne dele sådanne ord rigtigt opbygges en undtagelsesordbog, der indeholder alle de ord, som erfaringsmaessigt giver anledning til fejl. På grund af det store antal opslag i en sådan ordbog er det vigtigt, at den organiseres således, at søgning er effektiv; specielt skal søgning efter ord, der ikke findes i ordbogen, vaere effektiv. Dette opnås ved at benytte en såkaldt hashteknik. Algoritmen har i linie 3-5 arbejdet på hele ordet; men i linie 6-22 findes en løkke, som anvendes til at finde delepunkter mellem de enkelte stavelser. Betingelsen for at der findes et delepunkt mellem to stavelser i denne løkke er, at der ikke allerede i linie 3-5 er registreret et delepunkt mellem de pågaeldende stavelser.
I linie 8 gennemløbes to affikslister, hvor den ene indeholder de affikser, som der skal deles efter, medens den anden består af de affikser, som der skal deles før. Hvis et sådant affiks ikke genkendes, antages stavelsen at vaere en del af et usammensat ord, og der foregår derfor en opsplitning efter antallet af konsonanter ved hjaelp af CASE-saetningen i linie 11. Det tilfaelde, hvor der ikke findes nogen konsonanter mellem de to vokaler, der afgraenser stavelsen, behandles i linie 12. Et sådant vokalsammenstød kan altid deles i oprindeligt danske ord; men i mange fremmedord betegner de to vokaler en diftong, som ikke kan deles. For eksempel kan hverken ea eller au deles i ordet niveau. Der er derfor opbygget en lille tabel, der angiver mellem hvilke vokaler, en deling er tilladelig. Tilfaeldet 1 konsonant klares umiddelbart i linie 13, medens tilfaeldet 2 konsonanter løses ved opslag i tokonsonantmatricen. Ved tre eller flere konsonanter anvendes tilstandsautomaten; indeholder den ingen løsning, anvendes tokonsonantmatricen på de to konsonanter umiddelbart før den anden vokal. Da automatisk orddeling er en kilde til fejl, bør orddelingsalgoritmen ikke aktiveres, hvis man uden orddeling kan opnå en linie af typografisk acceptabelt udseende. Man kan ved hjaelp af følgende fire parametre styre dels antallet af delinger, dels udseendet af disse: 1) Aktiv eller passiv 2) Relativ spildfaktor 3) Sikkerhedsniveau 4) Minimum antal tegn før og efter delestreg. (5. RECKU er en forkortelse for Det Regionale Edb-center ved Københavns Universitet.) Første parameter giver mulighed for at hindre, at orddelingsalgoritmen aktiveres. Dette har isaer betydning, hvis kildesproget ikke er dansk, da algoritmen er uanvendelig til andre sprog end dansk. Anden parameter, den relative spildfaktor, angiver den procentdel af linien, der skal vaere ubrugt før orddelingsalgoritmen aktiveres. Angives en stor spildfaktor, fås få orddelinger men til gengaeld også en "løs" og dermed uøkonomisk sats; omvendt vil man ved en lille spildfaktor få en "taet" sats med mange delinger. Den bedste spildfaktor afhaenger af personlig smag; men en vaerdi mellem 5 og 10 procent giver saedvanligvis gode resultater. Det er vaesentligt, at spildfaktoren er relativ: En lang linie giver bedre mulighed for at fordele en vis uudnyttet plads end en kort linie. Ved omtalen af tokonsonantmatricen blev det naevnt, at visse kombinationer bliver registreret som usikre; men også nogle få konsonanter i enkonsonantforbindelser samt enkelte kombinationer i tilstandsautomaten er registreret som usikre. Ved at angive et sikkerhedsniveau for orddelinger kan man undgå, at ord deles i usikre delepunkter, og man kan ligeledes vaelge kun at dele ord i sikre delepunkter, hvortil regnes forekomster af bindestreg, komma eller skråstreg i ordet. Sidste mulighed har isaer betydning for engelsk, hvor man på denne måde har mulighed for at dele alle de sammensaetninger, der er dannet ved hjaelp af en bindestreg. Sidste parameter angiver det minimale antal bogstaver, der skal findes såvel før som efter delepunktet. Der er almindelig enighed om, at to bogstaver må vaere minimum; men et minimum på tre tegn foretraekkes undertiden. PHOTODOC praesenterer alle orddelinger for brugeren, idet der indsaettes et blanktegn på det sted i ordet, hvor datamaten mener, at ordet skal deles. Stjernen indsaettes for at markere den del af ordet, der maksimalt kan stå før delestregen.
Det første delte ord i eksemplet, tekstbehandling, vil datamaten således dele tekstbe-handling, og delestregen kan ikke indsaettes senere end efter h. Accepteres orddelingen, trykker brugeren blot på vognreturknappen, i modsat fald skrives ordet med et blanktegn indsat på det sted, hvor ordet ønskes delt. Et eksempel er vist ved ordet trykfaerdig, som brugeren ønsker delt tryk-faerdig. Af effektivitetsgrunde kan den del af ordet, der skrives efter blanktegnet, dog udelades. Endelig vises eksempler på de to fejl, som brugeren kan begå. I det første tilfaelde forsøger man at indsaette en delestreg til højre for maksimumspunktet, og i det andet tilfaelde staves ordet forkert ved rettelsen. I begge tilfaelde udskrives en fejlmeddelelse, hvorefter brugeren har mulighed for at rette fejlen.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 544 | 0 | null | null | null | null | null | null | null | null |
402ce895a700bcdf7c9fc619e0762f4b0e30bf86 | 489617 | null | Automatisk lemmatisering utan stamlexikon (Automatic lemmatization without stem lexica) [In {S}wedish] | AUTOMATISK LEMMATISERING UTAN STAMLEXIKON. Några synpunkter tio år efteråt. En tillbakablick: För precis tio år sedan, hösten 1969, genomfördes det första stora lemmatiseringsarbetet vid Språkdata. Det var den bearbetning som kom att bilda grundmaterialet till Nusvensk frekvensordbok 2. Två | {
"name": [
"Gavare, Rolf"
],
"affiliation": [
null
]
} | null | null | Proceedings of the 2nd Nordic Conference of Computational Linguistics ({NODALIDA} 1979) | 1979-10-01 | 0 | 0 | null | Computerized Lemmatization without the Use of a Dictionary: A Case Study from Swedish Lexicology. Lemmatization, i.e., the bringing together of the inflectional forms (and variant forms) of a word under one heading, is one of the problems when making a frequency dictionary out of a large text corpus with the aid of a computer. Attempts have generally gone in the direction of confronting the material with an ordinary dictionary, presupposing that this dictionary would have an entry for practically every form in the corpus. This may be true for some texts, e.g., the classical ones, but it is definitely not true for a newspaper text corpus in a language like Swedish, which not only shows brand-new loan-words but is abundant in compounds of the more or less casual sort that will never appear in ordinary dictionaries. So the task we undertook in 1969-70 at the Research Group for Modern Swedish, University of Göteborg, was to lemmatize automatically about 112,000 different word forms without direct access to any existing dictionary. Homographs had been previously separated with the aid of a KWIC-index (the original number of different graphic words was about 103,000), a fact which meant that about one-third of the forms had been assigned grammatical information (word class and, roughly, gender or conjugation). As Swedish contains no inflectional prefixes, the procedure can operate with an alphabetically sorted version of the material. The computer passes through that version, successively grouping the forms into lemmas and printing them out, so that the whole lemmatization can be checked manually afterwards. The program tests the first form in the projected lemma against the ones following alphabetically, one at a time, providing they have not already been included in a previously finished lemma. As soon as a form appears which is not identical with the first form as far as the stem of the latter goes, the testing is stopped and the lemma finished. The form that heads the remainder of forms alphabetically is then chosen as the first form of the next lemma, and the procedure is repeated. The "stem" of a lemma thus had to be defined as the part of the word that was identical in all its inflectional forms. The remnants of the forms were called "endings." Obviously, these definitions don't altogether coincide with the usual linguistic ones: the word titel, 'title,' plural titlar, for instance, got the stem tit- and the endings -el, -lar, etc., though linguistically, the stem should rather be titl- and the plural ending -ar. An index was set up of those graphic sequences that might be endings in regular paradigms. Lexical regularity proved not to be the same thing as grammatical regularity; for instance, an irregular noun occurring as the latter element in many compounds had to be taken account of. The word man 'man,' plural män, is as irregular as in English, but it appeared in over 150 compounds in the corpus, e.g., adelsman, 'nobleman,' plural adelsmän, and so a paradigm -an, -än, etc. was established. In all, 53 different paradigms were made the basis of the index, which contained 98 different endings.
The figures give a somewhat exaggerated idea of the complexity of Swedish morphology, as one linguistic paradigm often had to be split into two or more paradigms here: compare titel, titlar (above) with the endings -el, -lar, etc. to stol, 'chair,' plural stolar with the endings -o, -or, etc. For two forms to be brought into the same lemma, they were required to have an identical stem and compatible endings, i.e., such as could belong to the same paradigm. Whether the identity actually covered the whole stem was decided by checking whether the remnants of the forms were possible endings. So the index here served two purposes: to identify the latter parts of the forms as endings, and to give access to what was called the alpha-list, where for each ending the endings compatible with it were stored. But for the former procedure to function properly, it was necessary that every graph or graphic sequence Y which could not itself be an ending but which had a counterpart XY that was a possible ending appear in the index, where it was stored as a pseudo-ending with an empty alpha-row. An example is the final -l which didn't occur in any paradigm, while the sequence -el did (see titel, titlar above). In all, 14 pseudo-endings were required. If the alpha test gave a negative result, it was repeated with the rightmost graph (roughly: letter) of the stem brought over to the endings, provided, of course, that these new endings were to be found in the index at all. But once a shorter stem had been recognized by a successful test of that kind, it was not allowed to be lengthened again as a result of a comparison with yet another form, because that would mean an obvious mixing of two paradigms. The index served its third purpose when giving entry to the so-called beta-list, where the possible grammatical labels were given for each ending. The beta-list was consulted when one of the tested forms, or both, was a homograph and thus "marked" for grammatical category, and so a number of wrong lemmatizations could be prevented through the demand for grammatical compatibility. The beta-list was also used in the subprogram of automatic attributing of head forms and grammatical labels to all the lemmas, which will not be reported here. The main course of the procedure is shown in the flow chart. Several improvements were suggested by our programmer, Rolf Gavare, who wrote the program in DATASAAB/ALGOL-GENIUS and DAC. Some measures were taken to compress the lists. One of these made use of the structure of the Swedish inflectional system, where the ending -s plays a unique role. It always occupies the last position in the form, and it can be added to practically every form of nouns, adjectives, and verbs, having either a genitive or a passive function. If all those s-variants of the endings had been accounted for in the normal way, it would have meant nearly a doubling of the index and a considerable enlarging of the alpha-list. Instead, all forms ending with an -s were treated as if the -s wasn't there, except those where the -s belonged to the stem and which could be readily sorted out, as they were homographs internally with their own genitives and thus had a special "marking."
There were also quite a few ad hoc measures taken to obtain a better result, as several minor defects could be foreseen during the construction of the lists and by scrutinizing the result of test computations. Some of the measures simply meant omitting an item from one of the lists, thereby replacing a number of wrong lemmatizations by a smaller number of missing correct ones. A measure of a different kind worth mentioning was the rearrangement of the alphabetically ordered material so that out of two homographs, one noun and one verb, the verb was placed before the noun. That saved a fair number of lemmas from going wrong. The lemmatization yielded about 71,000 lemmas in all. The figure reveals that a large number of lemmas appeared in only one form. These lemmas did not cause any special troubles to the program, as a projected lemma could often be finished after its first form had been compared to - and shown too little similarity to - its nearest neighbor in the alphabetical order. A different subprogram had to be designed, though, for the attribution of head forms and grammatical labels (see above), as the beta-list gave no information in this case, where no boundary between stem and ending had been definitely established. Though the whole corpus was treated in the manner now described, not all lemmas could, of course, be made to come out correctly from the computer. The program would have been hopelessly slow and complex if it had had to account for strong verbs, regular though they might be. There were also very rare paradigms that would have done more harm than good if they had been brought into the lists. In fact, the accomplished wrong lemmatizations are more notable than the missing correct ones. Not all clashes could be prevented by the above-mentioned ad hoc measures. And as the material also contained foreign words occurring in the newspaper corpus, there appeared a number of ridiculous lemmas, such as the one consisting of (English) fair and (French) faire. The manual check of the computer output showed that 3.5 percent of the forms were in the wrong place and had to be moved. As this check was done with relative ease, the lemmatization program may well be said to have saved us from a considerable amount of dull routine work. Still, it could be asked whether the automatic procedure has actually been optimized. The number of wrong lemmatizations indicates that the alpha-list didn't have a sufficient discriminating function. This is actually natural for Swedish, where some sequences are very common as endings in different functions: the ending -er, for instance, occurs in 12 paradigms and is compatible with 29 other endings. In closing, I will give a brief account of an alternative solution that I outlined after the computing of our material had been accomplished. In this solution, the ideas of alphabetical procedure and of an index of possible endings are taken over from the system used. But the alpha- and beta-lists are replaced by what could be called the gamma-list. That is, for each ending information is now given about which paradigms this ending can occur in, the paradigms having numbers from 1 to 53. For two forms to be brought together, it is now required that they have an identical stem and at least one paradigm number in common. If the common number or numbers are stored, a third form can be tested against them, and that means that any new tentative form will be tested against all the previously accepted forms in the lemma, which wasn't possible in the system used. The beta-list is made superfluous by the grammatical labels being brought into the index and assigned paradigm numbers.
So when two forms are tested, one of which is a homograph, it is required that at least one number occur three times in the gamma-list: with the two endings and with the grammatical label. Most of the measures taken to improve the system used can be kept, as for instance the special treatment of forms ending in inflectional -s. Though the alternative solution hasn't been tested on the material, it seems fairly clear that it would have surpassed the one we chose. Of 12 different kinds of clashes that had been registered before the new system was developed, seven would have been avoided. What this would mean in figures is harder to guess. A reduction of the number of wrong lemmatizations by one-half is perhaps a somewhat too optimistic estimation. Proceedings of NODALIDA 1979 | null | null | null | null | Main paper:
:
Computerized Lemmatization without the Use of a Dictionary: A Case Study from Swedish Lexicology. Lemmatization, i.e., the bringing together of the inflectional forms (and variant forms) of a word under one heading, is one of the problems when making a frequency dictionary out of a large text corpus with the aid of a computer. Attempts have generally gone in the direction of confronting the material with an ordinary dictionary, presupposing that this dictionary would have an entry for practically every form in the corpus. This may be true for some texts, e.g., the classical ones, but it is definitely not true for a newspaper text corpus in a language like Swedish, which not only shows brand-new loan-words but is abundant in compounds of the more or less casual sort that will never appear in ordinary dictionaries. So the task we undertook in 1969-70 at the Research Group for Modern Swedish, University of Göteborg, was to lemmatize automatically about 112,000 different word forms without direct access to any existing dictionary. Homographs had been previously separated with the aid of a KWIC-index (the original number of different graphic words was about 103,000), a fact which meant that about one-third of the forms had been assigned grammatical information (word class and, roughly, gender or conjugation). As Swedish contains no inflectional prefixes, the procedure can operate with an alphabetically sorted version of the material. The computer passes through that version, successively grouping the forms into lemmas and printing them out, so that the whole lemmatization can be checked manually afterwards. The program tests the first form in the projected lemma against the ones following alphabetically, one at a time, providing they have not already been included in a previously finished lemma. As soon as a form appears which is not identical with the first form as far as the stem of the latter goes, the testing is stopped and the lemma finished. The form that heads the remainder of forms alphabetically is then chosen as the first form of the next lemma, and the procedure is repeated. The "stem" of a lemma thus had to be defined as the part of the word that was identical in all its inflectional forms. The remnants of the forms were called "endings." Obviously, these definitions don't altogether coincide with the usual linguistic ones: the word titel, 'title,' plural titlar, for instance, got the stem tit- and the endings -el, -lar, etc., though linguistically, the stem should rather be titl- and the plural ending -ar. An index was set up of those graphic sequences that might be endings in regular paradigms. Lexical regularity proved not to be the same thing as grammatical regularity; for instance, an irregular noun occurring as the latter element in many compounds had to be taken account of. The word man 'man,' plural män, is as irregular as in English, but it appeared in over 150 compounds in the corpus, e.g., adelsman, 'nobleman,' plural adelsmän, and so a paradigm -an, -än, etc. was established. In all, 53 different paradigms were made the basis of the index, which contained 98 different endings. The figures give a somewhat exaggerated idea of the complexity of Swedish morphology, as one linguistic paradigm often had to be split into two or more paradigms here: compare titel, titlar (above) with the endings -el, -lar, etc.
to stol, 'chair,' plural stolar with the endings -o, -or, etc. For two forms to be brought into the same lemma, they were required to have an identical stem and compatible endings, i.e., such as could belong to the same paradigm. Whether the identity actually covered the whole stem was decided by checking whether the remnants of the forms were possible endings. So the index here served two purposes: to identify the latter parts of the forms as endings, and to give access to what was called the alpha-list, where for each ending the endings compatible with it were stored. But for the former procedure to function properly, it was necessary that every graph or graphic sequence Y which could not itself be an ending but which had a counterpart XY that was a possible ending appear in the index, where it was stored as a pseudo-ending with an empty alpha-row. An example is the final -l which didn't occur in any paradigm, while the sequence -el did (see titel, titlar above). In all, 14 pseudo-endings were required. If the alpha test gave a negative result, it was repeated with the rightmost graph (roughly: letter) of the stem brought over to the endings, provided, of course, that these new endings were to be found in the index at all. But once a shorter stem had been recognized by a successful test of that kind, it was not allowed to be lengthened again as a result of a comparison with yet another form, because that would mean an obvious mixing of two paradigms. The index served its third purpose when giving entry to the so-called beta-list, where the possible grammatical labels were given for each ending. The beta-list was consulted when one of the tested forms, or both, was a homograph and thus "marked" for grammatical category, and so a number of wrong lemmatizations could be prevented through the demand for grammatical compatibility. The beta-list was also used in the subprogram of automatic attributing of head forms and grammatical labels to all the lemmas, which will not be reported here. The main course of the procedure is shown in the flow chart. Several improvements were suggested by our programmer, Rolf Gavare, who wrote the program in DATASAAB/ALGOL-GENIUS and DAC. Some measures were taken to compress the lists. One of these made use of the structure of the Swedish inflectional system, where the ending -s plays a unique role. It always occupies the last position in the form, and it can be added to practically every form of nouns, adjectives, and verbs, having either a genitive or a passive function. If all those s-variants of the endings had been accounted for in the normal way, it would have meant nearly a doubling of the index and a considerable enlarging of the alpha-list. Instead, all forms ending with an -s were treated as if the -s wasn't there, except those where the -s belonged to the stem and which could be readily sorted out, as they were homographs internally with their own genitives and thus had a special "marking." There were also quite a few ad hoc measures taken to obtain a better result, as several minor defects could be foreseen during the construction of the lists and by scrutinizing the result of test computations. Some of the measures simply meant omitting an item from one of the lists, thereby replacing a number of wrong lemmatizations by a smaller number of missing correct ones.
A measure of a different kind worth mentioning was the rearrangement of the alphabetically ordered material so that out of two homographs, one noun and one verb, the verb was placed before the noun. That saved a fair number of lemmas from going wrong. The lemmatization yielded about 71,000 lemmas in all. The figure reveals that a large number of lemmas appeared in only one form. These lemmas did not cause any special troubles to the program, as a projected lemma could often be finished after its first form had been compared to - and shown too little similarity to - its nearest neighbor in the alphabetical order. A different subprogram had to be designed, though, for the attribution of head forms and grammatical labels (see above), as the beta-list gave no information in this case, where no boundary between stem and ending had been definitely established. Though the whole corpus was treated in the manner now described, not all lemmas could, of course, be made to come out correctly from the computer. The program would have been hopelessly slow and complex if it had had to account for strong verbs, regular though they might be. There were also very rare paradigms that would have done more harm than good if they had been brought into the lists. In fact, the accomplished wrong lemmatizations are more notable than the missing correct ones. Not all clashes could be prevented by the above-mentioned ad hoc measures. And as the material also contained foreign words occurring in the newspaper corpus, there appeared a number of ridiculous lemmas, such as the one consisting of (English) fair and (French) faire. The manual check of the computer output showed that 3.5 percent of the forms were in the wrong place and had to be moved. As this check was done with relative ease, the lemmatization program may well be said to have saved us from a considerable amount of dull routine work. Still, it could be asked whether the automatic procedure has actually been optimized. The number of wrong lemmatizations indicates that the alpha-list didn't have a sufficient discriminating function. This is actually natural for Swedish, where some sequences are very common as endings in different functions: the ending -er, for instance, occurs in 12 paradigms and is compatible with 29 other endings. In closing, I will give a brief account of an alternative solution that I outlined after the computing of our material had been accomplished. In this solution, the ideas of alphabetical procedure and of an index of possible endings are taken over from the system used. But the alpha- and beta-lists are replaced by what could be called the gamma-list. That is, for each ending information is now given about which paradigms this ending can occur in, the paradigms having numbers from 1 to 53. For two forms to be brought together, it is now required that they have an identical stem and at least one paradigm number in common. If the common number or numbers are stored, a third form can be tested against them, and that means that any new tentative form will be tested against all the previously accepted forms in the lemma, which wasn't possible in the system used. The beta-list is made superfluous by the grammatical labels being brought into the index and assigned paradigm numbers.
So when two forms are tested, one of which is a homograph, it is required that at least one number occur three times in the gamma-list: with the two endings and with the grammatical label. Most of the measures taken to improve the system used can be kept, as for instance the special treatment of forms ending in inflectional -s. Though the alternative solution hasn't been tested on the material, it seems fairly clear that it would have surpassed the one we chose. Of 12 different kinds of clashes that had been registered before the new system was developed, seven would have been avoided. What this would mean in figures is harder to guess. A reduction of the number of wrong lemmatizations by one-half is perhaps a somewhat too optimistic estimation. Proceedings of NODALIDA 1979
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 544 | 0 | null | null | null | null | null | null | null | null |
ee234b8ea703928bca1ed58ceccbe086b2ff8169 | 20975332 | null | Produksjon av en lemmatisert konkordans til Ibsens samlede verker (Production of a lemmatized concordance of the collected works of Ibsen) [In {N}orwegian] | Tekstgrunnlaget er hundreårsutgaven av Ibsens verk og består av 28 skuespill derav 4 i to utgaver og et bind med dikt. Dette utgjør ca 3/4 million løpende ord. Publiseringsmåten for konkordansen vil bli på mikrokort og det vil også bli aktuelt med en trykket utgave av en del av materialet, et sitatleksikon. Ved tilrettelegging av teksten er det ført inn opplysninger om hvem en replikk er rettet til, type av scenehenvisning og markering av enderim etter type, der skuespill er på verseform. | {
"name": [
"Hofland, Knut"
],
"affiliation": [
null
]
} | null | null | Proceedings of the 2nd Nordic Conference of Computational Linguistics ({NODALIDA} 1979) | 1979-10-01 | 0 | 0 | null | Ibsen skifter rettskriving i 1870. Ved lemmatiseringen bruker en skrivemåten etter 1870 som grunnlag for oppslagsformen. Som en hjelp til brukeren av konkordansen (saerlig i utlandet) innføres det ett sett med henvisninger. Dette gjelder fra moderne norsk rettskriving til Ibsens, fra Ibsen før 1870 til etter 1870 og ved sterk bøying fra ordform til oppslagsform. Det er lagt opp til at hvert belegg kan få en nøye tilpasset kontekst. Deler av konteksten kan utelates og det kan også tilføyes opplysninger som f.eks. henvisning til pronomene. I tillegg til referansen vil det for hvert belegg vaere gitt opplysninger om hvem som har replikken og til hvem den er rettet. Tekstene lemmatiseres verk for verk. Den første teksten ble lemmatisert manuelt ut i fra en konkordans med 3 linjers kontekst. Det var her forslag til kontekstavgrensing basert på skilletegn. Ved de to neste verkene ble opplysningene fra det/de foregående verk stilt opp som forslag (se eksempel). | null | null | null | null | Main paper:
:
Ibsen skifter rettskriving i 1870. Ved lemmatiseringen bruker en skrivemåten etter 1870 som grunnlag for oppslagsformen. Som en hjelp til brukeren av konkordansen (saerlig i utlandet) innføres det ett sett med henvisninger. Dette gjelder fra moderne norsk rettskriving til Ibsens, fra Ibsen før 1870 til etter 1870 og ved sterk bøying fra ordform til oppslagsform. Det er lagt opp til at hvert belegg kan få en nøye tilpasset kontekst. Deler av konteksten kan utelates og det kan også tilføyes opplysninger som f.eks. henvisning til pronomene. I tillegg til referansen vil det for hvert belegg vaere gitt opplysninger om hvem som har replikken og til hvem den er rettet. Tekstene lemmatiseres verk for verk. Den første teksten ble lemmatisert manuelt ut i fra en konkordans med 3 linjers kontekst. Det var her forslag til kontekstavgrensing basert på skilletegn. Ved de to neste verkene ble opplysningene fra det/de foregående verk stilt opp som forslag (se eksempel). (Eksempel: faksimile ikke gjengitt.) Proceedings of NODALIDA 1979
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 544 | 0 | null | null | null | null | null | null | null | null |
59fafe177190d4c178e0eddcafcd74bc8e3711b7 | 35078891 | null | Lemmatisering {--} hvilke af de ideelle krav til lemmatisering er opfyldelige eller opfyldte? (Lemmatization {--} which of the ideal requirements of lemmatization are fulfillable or fulfilled?) [In {D}anish] | Lemmatization -- which of the ideal requirements of lemmatization are fulfillable or fulfilled? Lemmatization is a term current in rather narrow circles, where it is used in a way that is not entirely unambiguous but nevertheless has a central core of meaning that everyone agrees belongs to the term. If one consults a range of standard linguistic terminological dictionaries or survey works, however, one finds that the term is not included and defined in those works. To lemmatize means, minimally, to assign a word from a text to a particular type or category of which the word actually occurring in the text can be claimed to be an inflected form. This presupposes an analysis of the word and possibly of its surroundings in the text, but it need not presuppose information lying outside the word and the text itself. | {
"name": [
"Holmboe, Henrik"
],
"affiliation": [
null
]
} | null | null | Proceedings of the 2nd Nordic Conference of Computational Linguistics ({NODALIDA} 1979) | 1979-10-01 | 0 | 0 | null | null | null | null | null | 1 ... Proceedings of NODALIDA 1979 | Main paper:
:
1 ... Proceedings of NODALIDA 1979
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 544 | 0 | null | null | null | null | null | null | null | null |
f17f7a0721b99a9c1a59abf6ea7c10240a616efb | 1585824 | null | Norsk tekstarkiv (The {N}orwegian Text Archive) [In {N}orwegian] | Datamaskinell språkbehandling er kanskje det feltet som har stått mest sentralt siden datamaskinene gjorde sitt inntog i de humanistiske fag på 60-tallet i Norge. Ved Nordisk institutt. Universitetet i Bergen ble det tidlig | {
"name": [
"Hauge, Jostein"
],
"affiliation": [
null
]
} | null | null | Proceedings of the 2nd Nordic Conference of Computational Linguistics ({NODALIDA} 1979) | 1979-10-01 | 0 | 0 | null | null | null | null | null | null | Main paper:
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 544 | 0 | null | null | null | null | null | null | null | null |
664b5099f3d0ddf218d2533dc9f080178a6ae08d | 1784394 | null | Projektet Engelskt talspr{\aa}k (The Survey of Spoken {E}nglish Project) [In {S}wedish] | Projektet Engelskt talspråk i Lund, eller som det heter på engelska. Survey of Spoken English (SSE), är ett dotterpro jekt till Survey of English Usage vid University College London. Projektet stöds sedan 1975 av Riksbankens Jubileums fond. Projektledare är Professor Jan Svartvik. MATERIAL ) Projektets material är den s.k. 'London-Lund Corpus'. Den be står av ca 1/2 miljon ord av talad engelska i ett flertal olika situationer. Materialet är inspelat, ortografiskt transkriberat, och prosodiskt och paralingvistiskt analyserat under ledning av Professor Randolph Quirk vid University Col lege i London. De s.k. texterna av engelskt talspråk består av följande kategorier: Material with origin in speech (100 "texts") A Monologue (24) Prepared (but unscripted) oration 6 | {
"name": [
"Thavenius, Cecilia"
],
"affiliation": [
null
]
} | null | null | Proceedings of the 2nd Nordic Conference of Computational Linguistics ({NODALIDA} 1979) | 1979-10-01 | 0 | 0 | null | null | Proceedings of NODALIDA 1979 | null | null | null | Main paper:
:
Proceedings of NODALIDA 1979
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 544 | 0 | null | null | null | null | null | null | null | null |
bed0a188e910542124a986369c8193a4439fd29a | 29611640 | null | The Parameters of Conversational Style | There are several dimensions along which verbalization responds to context, resulting in individual and social differences in conversational style. Style, as I use the term, is not something extra added on, like decoration. Anything that is said must be said in some way; co-occurrence expectations of that "way" constitute style. The dimensions of style I will discuss are: 1. Fixity vs. novelty 2. Cohesiveness vs. expressiveness 3. Focus on content vs. interpersonal involvement. | {
"name": [
"Tannen, Deborah"
],
"affiliation": [
null
]
} | null | null | 18th Annual Meeting of the Association for Computational Linguistics | 1980-06-01 | 10 | 6 | null | null | null | null | Fixity vs. novelty Any utterance or sequence must be identified (rightly or wrongly, in terms of interlocuter's intentions) with a recognizable frame, as it conforms more or less to a familiar pattern. Every utterance and interaction is formulaic, or conventionalized, to some degree. There is a continuum of formulaicness from utterly fixed strings of words (situational formulas: "Happy birthday," "Welcome home," "Gezundheit") and strings of events (rituals), to new ideas and acts put together in a new way. Of course, the latter does not exist except as an idealization. Even the most novel utterance is to some extent formulaic, as it must use familiar words (witness the absurdity of Humpty Dumpty's assertion that when he uses a word it means whatever he wants it to mean, and notice that he chooses to exercise this license with only one word); syntax (again Lewis Carroll is instructive: the "comprehensibility" of Jabberwocky); intonation; coherence principles (cf Alton Becker); and content (Mills' "vocabularies of motives," e.g.). All these are limited by social convention. Familiarity with the patterns is necessary for the signalling of meaning both as prescribed and agreed upon, and as cued by departure from the pattern (cf Hymes).For example, a situational formula is a handy way to signal familiar meaning, but if the formula is not known the meaning may be lost entirely, as when a Greek says to an American cook, "Health to your hands." If meaning is not entirely lost, at least a level of resonance is lost, when reference is implicit to a fixed pattern which is unfamiliar to the interlocutor. For example, when living in Greece and discussing the merits of buying an icebox with a Greek Friend, I asked, "Doesn't the iceman cometh?" After giggling alone in the face of his puzzled look, I ended up feeling I hadn't communicated at all. Indeed I hadn't.Cohesiveness vs. expressiveness This is the basic linguistic concept of markedness and is in a sense another facet of the above distinction. What is prescribed by the pattern for a given context, and what is furnished by the speaker for this instance? To what extent is language being used to signal "business as usual," as opposed to signalling, "Hey, look at this!" This distinction shows up on every level of verbalization too: lexical choice, pitch and amplitude, prosody, content, genre, and so on. For example, if someone uses an expletive, is this a sign of intense anger or is it her/his usual way of talking? If they reveal a personal experience or feeling, is that evidence that you are a special friend, or do they talk that way to everybody? Is overlap a way of trying to take the floor away from you or is it their way of showing interest in what you're saying? Of course, ways of signalling special meaning --expressiveness --are also prescribed by cultural convention, as the work of John Gumperz shows. The need to distinguish between individual and social differences is thus intertwined with the need to distinguish between cohesive and ex-pressive intentions. 
One more example will be presented, based on spontaneous conversation taped during Thanks-• giving dinner, among native speakers of English from different ethnic and geographic backgrounds.In responding to stories and comments told by speakers from Los Angeles of Anglican/Irish background, speakers of New York Jewish background often uttered paralinguistically gross sounds and phrases ("WHAT!? .... How INTeresting! .... You're KIDding! .... Ewwwwww!"). In this context, these "exaggerated" responses had the effect of stopping conversational flow. In contrast, when similar responses were uttered while listening to stories and comments by speakers of similar background, they had the effect of greasing the conversational wheels, encouraging conversation. Based on the rhythm and content of the speakers' talk, as well as their discussion during playback (i.e. listening to the tape afterwards), I could hypothesize that for the New Yorkers such "expressive" responses are considered business as usual; an enthusiasm constraint is operating, whereby a certain amount of expressiveness is expected to show interest. It is a cohesive device, a conventionally accepted way of having conversation. In contrast, such responses were unexpected to the Californians and therefore were taken by them to signal, "Hold it! There's something wrong here." Consequently, they stopped and waited to find out what was wrong. Of course such differences have interesting implications for the ongoing interaction, but what is at issue here is the contrast between the cohesive and expressive use of the feature.Focus on content vs. interpersonal involvement Any utterance is at the same time a statementof content (Bateson's 'message') and a statement about the relationship between interlocutors ('metamessage'). In other words, there is what I am saying, but also what it means that I am saying this in this way to this person at this time. In interaction, talk can recognize, more or less explicitly and more or less emphatically (these are different), the involvement between interlocutors. It has been suggested that the notion that meaning can stand alone, that only content is going on, is associated with literacy, with printed text. But certainly relative focus on content or on interpersonal involvement can be found in either written or spoken Form. I suspect, for example, that one of the reasons many people find interaction at scholarly conferences difficult and stressful is the conventional recognition of only the content level, whereas in fact there is a lot of involvemerit among people and between the people and the content. Whereas the asking of a question following a paper is conventionally a matter of exchange of information, in fact it is also a matter of presentation of self, as Goffman has demonstrated for all forms of behavior.A reverse, phenomenon has been articulated by Gall Dreyfuss. The reason many people feel uncomfortable, if not scornful, about encounter group talk and "psychobabble" is that it makes explicit information about relationships which people are used to signalling on the meta level. Kay (1977) calls "autonomous" language, wherein maximal meaning is encoded lexically, as opposed to signalling it through use of paralinguistic and nonlinguistic channels, and wherein maximal background information is furnished, as opposed to assuming it is already known as a consequence of sharedexperience. 
Of course this is an idealization as well, as no meaning at all could be communicated if there were no common experience, as Fillmore (197g) amply demonstrates. It ~s crucial, then, to know the operative conventions. As much of my own early work shows, a hint {i.e. indirect communication) can be missed if a listener is unaware that the speaker defines the context as one in which hints are appropriate. What is intended as relatively direct communication can be taken to mean f r more, or simply other, than what is meanS if the listener is unaware that the speaker defines the context as one'in which hints are inappropriate. A common example seems to be communication between intimates in which one partner, typically the female, assumes, "We know each other so well that you will know what I mean without my saying it outright; all I need do is hint"; while the other partner, typically the male, assumes, "We know each other so well that you will tell me what you want."Furthermore, there are various ways of honoring inter-~ersonal involvement, as service of two overriding human goals. These have been called, by Brown and Levinson (1978}, positive and negative politeness, building on R. Lakoff's stylistic continuum from camaraderie to distance (1973) and Goffman's presentational and avoidance rituals (1967). These and other schemata recognize the universal human needs to l) be connected to other people and 2) be left alone. Put another way, there are universal, simultaneous, and conflicting human needs for community and independence.Linguistic choices reflect service of one or the other of these needs in various ways. The paralinguistically gross listener responses mentioned above are features in an array of devices which I have hypothesized place the signalling load (Gumperz' term) on the need for community. Other features co-occurring in the speech of many speakers of this style include fast rate of speech; fast turn-taking; preference for simultaneous speech; tendency to introduce new topics without testing the conversational waters through hesitation and other signals; persistence in introducing topics not picked up by others; storytelling; preference for stories told about personal experience and revealing emotional reaction of teller;'talk about personal matters; overstatement for effect. (All of these features surfaced in the setting of a casual conversation at dinner; it would be premature to generalize for other settings). These and other features of the speech of the New Yorkers sometimes struck the Californians present as imposing, hence failing to honor their need for independence. The use of contrasting devices by the Californians led to the impression on some of the New Yorkers that they were deficient in honoring the need for community. Of course the underlying goals were not conceptualized by participants at the time. What was perceived was sensed as personality characteristics: "They're dominating," and "They're cold." Conversely, when style was shared, the conclusion was, "They're nice." Perhaps many of these stylistic differences come down to differing attitudes toward silence. I suggest that the fast-talking style I have characterized above grows out of a desire to avoid silence, which has a negative value. Put another way, the unmarked meaning of silence, in this system, is evidence of lack of rapport. 
To other speakers --for example, Athabaskan Indians, according to Basso (1972) and Scollon (1980) --the unmarked meaning of silence is positive.Individual and social differences All of these parameters are intended to suggest processes that operate in signalling meaning in conversation. Analys'is of cross-cultural differences is useful to make apparent processes that go unnoticed when signalling systems are shared.An obvious question, one that has been indirectly addressed throughout the present discussion, confronts the distinction between individual and cultural differences. We need to know, for the understanding of our own lives as much as for our theoretical understanding of discourse, how much of any speaker's style --the linguistic and paralinguistic devices signal)ing meaning --are prescribed by the culture, and which are chosen freely. The answer to this seems to resemble, one level further removed, the distinction between cohesive vs. expressive features. The answer, furthermore, must lie somewhere between fixity and novelty --a matter of choices among alternatives offered by cultural convention. | null | Main paper:
:
Fixity vs. novelty Any utterance or sequence must be identified (rightly or wrongly, in terms of interlocuter's intentions) with a recognizable frame, as it conforms more or less to a familiar pattern. Every utterance and interaction is formulaic, or conventionalized, to some degree. There is a continuum of formulaicness from utterly fixed strings of words (situational formulas: "Happy birthday," "Welcome home," "Gezundheit") and strings of events (rituals), to new ideas and acts put together in a new way. Of course, the latter does not exist except as an idealization. Even the most novel utterance is to some extent formulaic, as it must use familiar words (witness the absurdity of Humpty Dumpty's assertion that when he uses a word it means whatever he wants it to mean, and notice that he chooses to exercise this license with only one word); syntax (again Lewis Carroll is instructive: the "comprehensibility" of Jabberwocky); intonation; coherence principles (cf Alton Becker); and content (Mills' "vocabularies of motives," e.g.). All these are limited by social convention. Familiarity with the patterns is necessary for the signalling of meaning both as prescribed and agreed upon, and as cued by departure from the pattern (cf Hymes).For example, a situational formula is a handy way to signal familiar meaning, but if the formula is not known the meaning may be lost entirely, as when a Greek says to an American cook, "Health to your hands." If meaning is not entirely lost, at least a level of resonance is lost, when reference is implicit to a fixed pattern which is unfamiliar to the interlocutor. For example, when living in Greece and discussing the merits of buying an icebox with a Greek Friend, I asked, "Doesn't the iceman cometh?" After giggling alone in the face of his puzzled look, I ended up feeling I hadn't communicated at all. Indeed I hadn't.Cohesiveness vs. expressiveness This is the basic linguistic concept of markedness and is in a sense another facet of the above distinction. What is prescribed by the pattern for a given context, and what is furnished by the speaker for this instance? To what extent is language being used to signal "business as usual," as opposed to signalling, "Hey, look at this!" This distinction shows up on every level of verbalization too: lexical choice, pitch and amplitude, prosody, content, genre, and so on. For example, if someone uses an expletive, is this a sign of intense anger or is it her/his usual way of talking? If they reveal a personal experience or feeling, is that evidence that you are a special friend, or do they talk that way to everybody? Is overlap a way of trying to take the floor away from you or is it their way of showing interest in what you're saying? Of course, ways of signalling special meaning --expressiveness --are also prescribed by cultural convention, as the work of John Gumperz shows. The need to distinguish between individual and social differences is thus intertwined with the need to distinguish between cohesive and ex-pressive intentions. One more example will be presented, based on spontaneous conversation taped during Thanks-• giving dinner, among native speakers of English from different ethnic and geographic backgrounds.In responding to stories and comments told by speakers from Los Angeles of Anglican/Irish background, speakers of New York Jewish background often uttered paralinguistically gross sounds and phrases ("WHAT!? .... How INTeresting! .... You're KIDding! .... Ewwwwww!"). 
In this context, these "exaggerated" responses had the effect of stopping conversational flow. In contrast, when similar responses were uttered while listening to stories and comments by speakers of similar background, they had the effect of greasing the conversational wheels, encouraging conversation. Based on the rhythm and content of the speakers' talk, as well as their discussion during playback (i.e. listening to the tape afterwards), I could hypothesize that for the New Yorkers such "expressive" responses are considered business as usual; an enthusiasm constraint is operating, whereby a certain amount of expressiveness is expected to show interest. It is a cohesive device, a conventionally accepted way of having conversation. In contrast, such responses were unexpected to the Californians and therefore were taken by them to signal, "Hold it! There's something wrong here." Consequently, they stopped and waited to find out what was wrong. Of course such differences have interesting implications for the ongoing interaction, but what is at issue here is the contrast between the cohesive and expressive use of the feature.Focus on content vs. interpersonal involvement Any utterance is at the same time a statementof content (Bateson's 'message') and a statement about the relationship between interlocutors ('metamessage'). In other words, there is what I am saying, but also what it means that I am saying this in this way to this person at this time. In interaction, talk can recognize, more or less explicitly and more or less emphatically (these are different), the involvement between interlocutors. It has been suggested that the notion that meaning can stand alone, that only content is going on, is associated with literacy, with printed text. But certainly relative focus on content or on interpersonal involvement can be found in either written or spoken Form. I suspect, for example, that one of the reasons many people find interaction at scholarly conferences difficult and stressful is the conventional recognition of only the content level, whereas in fact there is a lot of involvemerit among people and between the people and the content. Whereas the asking of a question following a paper is conventionally a matter of exchange of information, in fact it is also a matter of presentation of self, as Goffman has demonstrated for all forms of behavior.A reverse, phenomenon has been articulated by Gall Dreyfuss. The reason many people feel uncomfortable, if not scornful, about encounter group talk and "psychobabble" is that it makes explicit information about relationships which people are used to signalling on the meta level. Kay (1977) calls "autonomous" language, wherein maximal meaning is encoded lexically, as opposed to signalling it through use of paralinguistic and nonlinguistic channels, and wherein maximal background information is furnished, as opposed to assuming it is already known as a consequence of sharedexperience. Of course this is an idealization as well, as no meaning at all could be communicated if there were no common experience, as Fillmore (197g) amply demonstrates. It ~s crucial, then, to know the operative conventions. As much of my own early work shows, a hint {i.e. indirect communication) can be missed if a listener is unaware that the speaker defines the context as one in which hints are appropriate. 
What is intended as relatively direct communication can be taken to mean f r more, or simply other, than what is meanS if the listener is unaware that the speaker defines the context as one'in which hints are inappropriate. A common example seems to be communication between intimates in which one partner, typically the female, assumes, "We know each other so well that you will know what I mean without my saying it outright; all I need do is hint"; while the other partner, typically the male, assumes, "We know each other so well that you will tell me what you want."Furthermore, there are various ways of honoring inter-~ersonal involvement, as service of two overriding human goals. These have been called, by Brown and Levinson (1978}, positive and negative politeness, building on R. Lakoff's stylistic continuum from camaraderie to distance (1973) and Goffman's presentational and avoidance rituals (1967). These and other schemata recognize the universal human needs to l) be connected to other people and 2) be left alone. Put another way, there are universal, simultaneous, and conflicting human needs for community and independence.Linguistic choices reflect service of one or the other of these needs in various ways. The paralinguistically gross listener responses mentioned above are features in an array of devices which I have hypothesized place the signalling load (Gumperz' term) on the need for community. Other features co-occurring in the speech of many speakers of this style include fast rate of speech; fast turn-taking; preference for simultaneous speech; tendency to introduce new topics without testing the conversational waters through hesitation and other signals; persistence in introducing topics not picked up by others; storytelling; preference for stories told about personal experience and revealing emotional reaction of teller;'talk about personal matters; overstatement for effect. (All of these features surfaced in the setting of a casual conversation at dinner; it would be premature to generalize for other settings). These and other features of the speech of the New Yorkers sometimes struck the Californians present as imposing, hence failing to honor their need for independence. The use of contrasting devices by the Californians led to the impression on some of the New Yorkers that they were deficient in honoring the need for community. Of course the underlying goals were not conceptualized by participants at the time. What was perceived was sensed as personality characteristics: "They're dominating," and "They're cold." Conversely, when style was shared, the conclusion was, "They're nice." Perhaps many of these stylistic differences come down to differing attitudes toward silence. I suggest that the fast-talking style I have characterized above grows out of a desire to avoid silence, which has a negative value. Put another way, the unmarked meaning of silence, in this system, is evidence of lack of rapport. To other speakers --for example, Athabaskan Indians, according to Basso (1972) and Scollon (1980) --the unmarked meaning of silence is positive.Individual and social differences All of these parameters are intended to suggest processes that operate in signalling meaning in conversation. Analys'is of cross-cultural differences is useful to make apparent processes that go unnoticed when signalling systems are shared.An obvious question, one that has been indirectly addressed throughout the present discussion, confronts the distinction between individual and cultural differences. 
We need to know, for the understanding of our own lives as much as for our theoretical understanding of discourse, how much of any speaker's style --the linguistic and paralinguistic devices signalling meaning --are prescribed by the culture, and which are chosen freely. The answer to this seems to resemble, one level further removed, the distinction between cohesive vs. expressive features. The answer, furthermore, must lie somewhere between fixity and novelty --a matter of choices among alternatives offered by cultural convention.
Appendix:
| null | null | null | null | {
"paperhash": [
"fillmore|innocence:_a_second_idealization_for_linguistics",
"basso|\"to_give_up_on_words\":_silence_in_western_apache_culture"
],
"title": [
"Innocence: A Second Idealization for Linguistics",
"\"To Give up on Words\": Silence in Western Apache Culture"
],
"abstract": [
"The nature of the fit between predictions generated by a theory and the phenomena within its domain can sometimes be assessed only when different sources of explanation can be isolated through one or more idealizations. One such idealization is the simplifying assumption, for the laws of Newtonian mechanics, that the physical bodies whose movements fall within their scope are (or can be treated as) dimensionless particles, not subject to distortion or friction. The empirical laws of elasticity and friction are themselves best formulated against this background idealization.",
"Combining methods from ethnoscience and sociolinguistics, this paper presents an hypothesis to account for why, in certain types of situations, members of Western Apache society refrain from speech. Though cross-cultural data on silence behavior are almost wholly lacking, some evidence has been collected which suggests that this hypothesis may have relevance to other societies as well."
],
"authors": [
{
"name": [
"C. Fillmore"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"K. Basso"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null
],
"s2_corpus_id": [
"116372826",
"162626381"
],
"intents": [
[],
[
"background"
]
],
"isInfluential": [
false,
false
]
} | null | 536 | 0.011194 | null | null | null | null | null | null | null | null |
fd1615244d1206ae341c4a9fc8a87ff90a32051c | 1743238 | null | Representation of Texts for Information Retrieval | The representation of whole texts is a major concern of the field known as information retrieval (IR), an important aspect of which might more precisely be called 'document retrieval' (DR). The DR situation, with which we will be concerned, is, in general, the following: a. A user, recognizing an information need, presents to an IR mechanism (i.e., a collection of texts, with a set of associated activities for representing, storing, matching, etc.) a request, based upon that need, hoping that the mechanism will be able to satisfy that need. | {
"name": [
"Belkin, N. J. and",
"Michell, B. G. and",
"Kuehner, D. G."
],
"affiliation": [
null,
null,
null
]
} | null | null | 18th Annual Meeting of the Association for Computational Linguistics | 1980-06-01 | 1 | 3 | null | a. A user, recognizing an information need, presents to an IR mechanism (i.e., a collection of texts, with a set of associated activities for representing, storing, matching, etc.) a request, based upon that need hoping that the mechanism will be able to satisfy that need.b. The task of the IR mechanism is to present the user with the text(s) that it judges to be most likely to satisfy the user's need, based upon the request.c. The user examines the text(s) and her/his need is satisfied completely or partially or not at all. The user's judgement as to the contribution of each text in satisfying the need establishes that text's usefulness or relevance to the need.Several characteristics of the problem which DR attempts to solve make current IR systems rather different from, say, question-answering systems. One is that the needs which people bring to the system require, in general, responses consisting of documents about the topic or problem rather than specific data, facts, or inferences. Another is that these needs are typically not precisely specifiable, being expressions of an anomaly in the user's state of knowledge.A third is that this is an essentially probabilistic, rather than deterministic situation, and is likely to remain so. And finally, the corpus of documents in many such systems is in the order of millions (of, say, journal articles or abstracts), and the potential needs are, within rather broad subject constraints, unpredictable.The DR situation thus puts certain constraints upon text representation and relaxes others.The major relaxation is that it may not be necessary in such systems to produce representations which are capable of inference.A constraint, on the other hand, is that it is necessary to have representations which ca~ indicate problems that a user cannot her/himself specify, and a matching system whose strategy is to predict which documents might resolve specific anomalies.This strategy can, however, be based on probability of resolution, rat.her than certainty.Finally, because of the large amount of data,. 
it is desirable that the representation techniques be reasonably simple computationally.Appropriate text representations, given these con-Straints, must necessarily be of whole texts, and probably ought to be themselves whole, unitary structures, rather than lists of atomic elements, each treated separately.They must be capable of representing problems, or needs, as well as expository texts, and they ought to allow for some sort of pattern matching.An obvious general schema within these requirements is a labelled associative network.Our approach to this general problem is strictly problem-oriented.We begin with a representation scheme which we realize is oversimplified, but which stands within the constraints, and test whether it can be progressively modified in response to observed deficiencies, until either the desired level of performance in solving the problem is reached, or the approach is shown to be unworkable.We report here on some lingu/stically-derived modifications to a very simple, but neverthe-less psychologically and linguistically based word-cooccurrence analysis of text [i] The original analysis was applied to two kinds of texts : abstracts of articles representing documents stored by the system, and a set of 'problem statements' representing users' information needs --their anomalous states of knowledge --when they approach the system. The analysis produced graph-like structures, or association maps, of the abstracts and problem statements which were evaluated by the authors of the texts (Figure 2 ) ( Figure 3 ).A method for clustering large files of documents using a clustering algorithm which takes O(n**2) operations (single-link) is proposed. This method is tested on a file of i1,613 doc%unents derived from an operational system.One property of the generated cluster hierarchy (hierarchy con~ection percentage) is examined and it indicates that the hierarchy is similar to those from other test collections.A comparison of clustering times with other methods shows that large files can be cluStered by singlelink in a time at least comparable to various heuristic algorithms which theoretically require fewer operations.In general, the representations were seen as being accurate reflections of the author's state of knowledge or problem; however, the majority of respondents also felt that some concepts were too strongly or weakly comnected, and that important concepts were omitted (Table i) .We think that at least some of these problems arise because the algorithm takes no account of discourse structure.But because the evaluations indicated that the algorithm produces reasonable representations, we ha%~ decided to amend the analytic structure, rather than abandon it completely. Our current modifications to the analysis consist primarily of methods for translating facts about discourse structure into rough equivalents within the word-cooccurrence paradigm. We choose this strategy, rather than attempting a complete and theoretically adequate discourse analysis, in order to incorporate insights about discourse without violating the cost -d volume constraints typical of DR systems.The modi~,cations are designed to recognize such aspects of discourse structure as establishment of topic; "setting of context; summarizing; concept foregrounding; and stylistic variation.Textual characteristics which correspond with these aspects Include discourse-initial and discoursefinal sentences; title words in the text: equivalence relations; and foregrounding devices (Figure 4 ). i. 
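As a rough illustration of the kind of representation just described — an association map built from word co-occurrence in a short text — the sketch below pairs nearby content words and accumulates link strengths. The window size, stop list, and sample input (adapted from the abstract quoted in the passage) are placeholders chosen for the example, not the authors' actual parameters.

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "of", "to", "in", "is", "and", "for", "which",
        "this", "on", "from", "by", "it"}

def association_map(text: str, window: int = 2) -> Counter:
    """Tiny co-occurrence analysis: every pair of content words appearing
    within `window` positions of one another gets its link strength
    incremented.  Returns {frozenset({w1, w2}): weight}."""
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP]
    links = Counter()
    for i, w in enumerate(words):
        for v in words[i + 1 : i + 1 + window]:
            if v != w:
                links[frozenset((w, v))] += 1
    return links

# Sample text adapted from the clustering abstract shown in the passage.
abstract = ("A method for clustering large files of documents using a clustering "
            "algorithm which takes O(n**2) operations (single-link) is proposed. "
            "This method is tested on a file of 11,613 documents derived from an "
            "operational system.")

for pair, weight in association_map(abstract).most_common(5):
    print(sorted(pair), weight)
```

The strongest links in such a map are what the authors' association maps display graphically; the real analysis also carried psychological and linguistic weighting not reproduced here.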
Repeat first and last sentences of the text.These sentences may include the more important concepts, and thus should be more heavily weighted. 2. Repeat first sentence of paragraph after the last sentence.To integrate these sentences more fully into ~he overall structure. 3. Make the title the first and last sentence of the text, or overweight the score for each cO-OCcurrence containing a title word. Concepts in the title are likely to be the most important in the text, yet are unlikely to be used often in the abstract. 4. Hyphenate phrases in the input text (phrases chosen algorithmically) and then either: a. Use the phrase only as a unit equivalent to a single word in the co-occurrence analysis ; or b. use any co-occurrence with either member of the phrase as a co-occurrence with the phrase, rather than the individual word. This is to control for conceptual units, as opposed to conceptual relations. 5. Modify original definition of adjacency, which counted stop-list words, to one which ignores stoplist words. This is to correct for the distortion caused by the distribution of function words in the recognition of multi-word concepts. We have written alternative systems for each of the proposed modifications.In this experiment the original corpus of thirty abstracts (but not the prublem statements) is submitted to all versions of the analysis programs and the results co~ared to the evaluations of the original analysis and to one another.From the comparisons can be determined: the extent to which discourse theory can be translated into these terms; and the relative effectiveness of the various modifications in improving the original representations. | null | null | null | null | Main paper:
:
a. A user, recognizing an information need, presents to an IR mechanism (i.e., a collection of texts, with a set of associated activities for representing, storing, matching, etc.) a request, based upon that need hoping that the mechanism will be able to satisfy that need.b. The task of the IR mechanism is to present the user with the text(s) that it judges to be most likely to satisfy the user's need, based upon the request.c. The user examines the text(s) and her/his need is satisfied completely or partially or not at all. The user's judgement as to the contribution of each text in satisfying the need establishes that text's usefulness or relevance to the need.Several characteristics of the problem which DR attempts to solve make current IR systems rather different from, say, question-answering systems. One is that the needs which people bring to the system require, in general, responses consisting of documents about the topic or problem rather than specific data, facts, or inferences. Another is that these needs are typically not precisely specifiable, being expressions of an anomaly in the user's state of knowledge.A third is that this is an essentially probabilistic, rather than deterministic situation, and is likely to remain so. And finally, the corpus of documents in many such systems is in the order of millions (of, say, journal articles or abstracts), and the potential needs are, within rather broad subject constraints, unpredictable.The DR situation thus puts certain constraints upon text representation and relaxes others.The major relaxation is that it may not be necessary in such systems to produce representations which are capable of inference.A constraint, on the other hand, is that it is necessary to have representations which ca~ indicate problems that a user cannot her/himself specify, and a matching system whose strategy is to predict which documents might resolve specific anomalies.This strategy can, however, be based on probability of resolution, rat.her than certainty.Finally, because of the large amount of data,. it is desirable that the representation techniques be reasonably simple computationally.Appropriate text representations, given these con-Straints, must necessarily be of whole texts, and probably ought to be themselves whole, unitary structures, rather than lists of atomic elements, each treated separately.They must be capable of representing problems, or needs, as well as expository texts, and they ought to allow for some sort of pattern matching.An obvious general schema within these requirements is a labelled associative network.Our approach to this general problem is strictly problem-oriented.We begin with a representation scheme which we realize is oversimplified, but which stands within the constraints, and test whether it can be progressively modified in response to observed deficiencies, until either the desired level of performance in solving the problem is reached, or the approach is shown to be unworkable.We report here on some lingu/stically-derived modifications to a very simple, but neverthe-less psychologically and linguistically based word-cooccurrence analysis of text [i] The original analysis was applied to two kinds of texts : abstracts of articles representing documents stored by the system, and a set of 'problem statements' representing users' information needs --their anomalous states of knowledge --when they approach the system. 
The analysis produced graph-like structures, or association maps, of the abstracts and problem statements which were evaluated by the authors of the texts (Figure 2 ) ( Figure 3 ).A method for clustering large files of documents using a clustering algorithm which takes O(n**2) operations (single-link) is proposed. This method is tested on a file of i1,613 doc%unents derived from an operational system.One property of the generated cluster hierarchy (hierarchy con~ection percentage) is examined and it indicates that the hierarchy is similar to those from other test collections.A comparison of clustering times with other methods shows that large files can be cluStered by singlelink in a time at least comparable to various heuristic algorithms which theoretically require fewer operations.In general, the representations were seen as being accurate reflections of the author's state of knowledge or problem; however, the majority of respondents also felt that some concepts were too strongly or weakly comnected, and that important concepts were omitted (Table i) .We think that at least some of these problems arise because the algorithm takes no account of discourse structure.But because the evaluations indicated that the algorithm produces reasonable representations, we ha%~ decided to amend the analytic structure, rather than abandon it completely. Our current modifications to the analysis consist primarily of methods for translating facts about discourse structure into rough equivalents within the word-cooccurrence paradigm. We choose this strategy, rather than attempting a complete and theoretically adequate discourse analysis, in order to incorporate insights about discourse without violating the cost -d volume constraints typical of DR systems.The modi~,cations are designed to recognize such aspects of discourse structure as establishment of topic; "setting of context; summarizing; concept foregrounding; and stylistic variation.Textual characteristics which correspond with these aspects Include discourse-initial and discoursefinal sentences; title words in the text: equivalence relations; and foregrounding devices (Figure 4 ). i. Repeat first and last sentences of the text.These sentences may include the more important concepts, and thus should be more heavily weighted. 2. Repeat first sentence of paragraph after the last sentence.To integrate these sentences more fully into ~he overall structure. 3. Make the title the first and last sentence of the text, or overweight the score for each cO-OCcurrence containing a title word. Concepts in the title are likely to be the most important in the text, yet are unlikely to be used often in the abstract. 4. Hyphenate phrases in the input text (phrases chosen algorithmically) and then either: a. Use the phrase only as a unit equivalent to a single word in the co-occurrence analysis ; or b. use any co-occurrence with either member of the phrase as a co-occurrence with the phrase, rather than the individual word. This is to control for conceptual units, as opposed to conceptual relations. 5. Modify original definition of adjacency, which counted stop-list words, to one which ignores stoplist words. This is to correct for the distortion caused by the distribution of function words in the recognition of multi-word concepts. 
We have written alternative systems for each of the proposed modifications.In this experiment the original corpus of thirty abstracts (but not the prublem statements) is submitted to all versions of the analysis programs and the results co~ared to the evaluations of the original analysis and to one another.From the comparisons can be determined: the extent to which discourse theory can be translated into these terms; and the relative effectiveness of the various modifications in improving the original representations.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 536 | 0.005597 | null | null | null | null | null | null | null | null |
919985b9adf97a08f29b534c822f2ceff8d3368b | 37012896 | null | What Type of Interaction Is It to Be | For one, like myself, who knows something about human interaction, but next to nothing about computers and human/machine interaction, the most useful role at a meeting such as this is to listen, to hear the troubles of those who work actively in the area, and to respond when some problem comes up for whose solution the practices of human interactants seems relevant. Here, therefore, I will merely mention some areas in which such exchanges may be useful. | {
"name": [
"Schegloff, Emanuel A."
],
"affiliation": [
null
]
} | null | null | 18th Annual Meeting of the Association for Computational Linguistics | 1980-06-01 | 0 | 4 | null | There appear to be two sorts of status for machine/technology under consideration here. In one, the interactants themselves are humans, but the interaction between them is carried by some technology. We have had the telephone for about lO0 years now, and letter writing much longer, so there is a history here; to it are to be added video technology, as in some of the work reported by John Carey, or computers, as in the "computer conferencing" work reported by Hiltz and her colleagues, among others. In the other sort of concern, one or more of the participants in an interaction is to be a computer.Here the issues seem to be: should this participant be designed to approximate a human interactant? What is required to do this? Is what is required possible? l) If we take as a tentative starting point that personperson interaction should tell us what machine-person interaction should be like (as Jerry Hobbs suggests in a useful orienting set of questions he circulated to us), we still need to determine what type of person-person interaction we should consult. It is common to suppose that ordinary conversation is, or should be, the model. But that is but one of a number of "speech-exchange systems" persons use to organize interaction, or to be organized by in it."t~eetings," "debates, .... interviews," and "ceremonies" are vernacular names for other technically specifiable, speech-exchange systems orgainzing person-person interaction. Different types of turn-taking organization are involved in each, and differences in turn-taking organization can have extensive ramifications for the conduct of the interaction, and the sorts of capacities required of the interactants. In the design Qf computer interactants, and in the introduction of technological intermediaries in human-human interaction, the issue remains which type of person-person interaction is aimed for or achieved. For example, in the Pennsylvania video link-up of senior citizen homes, John Carey asks whether the results look more like conversation or like commercial television. But many of details he reports suggests that the form of technological intervention has made what resulted most like a "meeting" speech exchange system.2) The term "interactive" in "interactive program" or in "person/machine interaction" seems to refer to no more than that provision is made for participation by more than one participant. "Interactive" in this sense is not necessarily "interactional," i.e., the determination of at least some aspects of each party's participation by collaboration of the parties. For the "talk" part of person-person interaction, a/the major vehicle for this "interactionality" is the sequential organization of the talk; that is, the construction of units of participation with specific respect to the details of what has preceded, and thereby the sequential position in which a current bit of talk is being done. Included among the relevant aspects of "what has preceded" and "current sequential position" is "temporality," or "real time," though not necessarily measured by conventional chronometry. What are, by commonsense standards, quite tiny bits of silence --two tenths of a second, or less (what we call micro-pauses) --can, and regularly do, have substantial sequential and interactional consequences. 
The character of the talk after them is regularly different, or is subject to different analysis, interpretation or inference.Although the telephone deprives interactants of visual access to each other, it leaves this "real time" temporality largely unaffected, and with it the integrity of sequential organization. Nearly all the technological interventions I have heard about --whether replacing an interactant, or inserted as a medium between interactants --impacts on this aspect of the exchange of talk. It is one reason for wondering whether retention of ordinary conversation as the target of this enterprise is appropriate. For some of the contemplated innovations, like computer conferencing, exchanges of letters may be a more appropriate past model to study, for there too more than one may "speak" at a time, long lapses may intervene between messages, sequential ordering may be puzzling (as in "Did the letters cross in the mail?") etc.3) Sequential organization has a direct bearing on an issue which must be of continuing concern to workers in this area --that of understanding and misunderstanding. It is the sequential (including temporal) organization of the talk which, in ordinary conversation, provides running evidence to participants that, and how, they have been understood. The devices by which troubles of understanding are addressed (what we call "repair," discussed for computers by Phil Hayes in a recent paper) --requests for repetition or clarification and the like -are only one part of the machinery which is at work. Regularly, in ordinary conversation, a speaker can detect from the produced-to-be-responsive next turn of another s/he has or has been, misunderstood, and can immediately intervene to set matters right. This is a major safeguard of "intersubjectivity," a retention of a sense that the "sa~ thing" is being understood as what is being spoken of. The requirements on interactants to make this work are substantial, but in ordinary conversation, much of the work is carried as a by-product of ordinary sequential organization. The anecodotes I have heard about misunderstandings going undetected for long stretches when computers are the medium, and leading to, or past, the verge of nastiness, suggest that these are real problems to be faced.In all the business of person-person interaction there operates what we call "recipient-design" --the design of the participation by each party by reference to the features (personal and idiosyncratic, or categorial) of the recipient or co-participant. The formal machineries of turn-taking, sequential organization, repair, etc. are always conditioned in their realization on particular occasions and moments by this consideration. I don't know how this enters into plans for computerized interactants, and it remains to be seen how it will enter into the participation of humans dealing with computers. Persons make all sorts of allowances for children, nonnative speakers, animals, the handicapped, etc. But there are other allowances they do not make, indeed that don't present themselves as allowances or allowables. What is involved here is a determination of where the robustness is and where the brittleness, in interacting with persons by computers, for in the areas of robustness it may be that many of the issues I've mentioned may be safely ignored; the people "will understand." Throughout these notes, we are at a very general tevel of discourse. The real pay-offs, however, will come from discussing specifics. 
For that, interaction will be needed, rather than position papers. | null | null | null | null | Main paper:
:
There appear to be two sorts of status for machine/technology under consideration here. In one, the interactants themselves are humans, but the interaction between them is carried by some technology. We have had the telephone for about lO0 years now, and letter writing much longer, so there is a history here; to it are to be added video technology, as in some of the work reported by John Carey, or computers, as in the "computer conferencing" work reported by Hiltz and her colleagues, among others. In the other sort of concern, one or more of the participants in an interaction is to be a computer.Here the issues seem to be: should this participant be designed to approximate a human interactant? What is required to do this? Is what is required possible? l) If we take as a tentative starting point that personperson interaction should tell us what machine-person interaction should be like (as Jerry Hobbs suggests in a useful orienting set of questions he circulated to us), we still need to determine what type of person-person interaction we should consult. It is common to suppose that ordinary conversation is, or should be, the model. But that is but one of a number of "speech-exchange systems" persons use to organize interaction, or to be organized by in it."t~eetings," "debates, .... interviews," and "ceremonies" are vernacular names for other technically specifiable, speech-exchange systems orgainzing person-person interaction. Different types of turn-taking organization are involved in each, and differences in turn-taking organization can have extensive ramifications for the conduct of the interaction, and the sorts of capacities required of the interactants. In the design Qf computer interactants, and in the introduction of technological intermediaries in human-human interaction, the issue remains which type of person-person interaction is aimed for or achieved. For example, in the Pennsylvania video link-up of senior citizen homes, John Carey asks whether the results look more like conversation or like commercial television. But many of details he reports suggests that the form of technological intervention has made what resulted most like a "meeting" speech exchange system.2) The term "interactive" in "interactive program" or in "person/machine interaction" seems to refer to no more than that provision is made for participation by more than one participant. "Interactive" in this sense is not necessarily "interactional," i.e., the determination of at least some aspects of each party's participation by collaboration of the parties. For the "talk" part of person-person interaction, a/the major vehicle for this "interactionality" is the sequential organization of the talk; that is, the construction of units of participation with specific respect to the details of what has preceded, and thereby the sequential position in which a current bit of talk is being done. Included among the relevant aspects of "what has preceded" and "current sequential position" is "temporality," or "real time," though not necessarily measured by conventional chronometry. What are, by commonsense standards, quite tiny bits of silence --two tenths of a second, or less (what we call micro-pauses) --can, and regularly do, have substantial sequential and interactional consequences. 
The character of the talk after them is regularly different, or is subject to different analysis, interpretation or inference.Although the telephone deprives interactants of visual access to each other, it leaves this "real time" temporality largely unaffected, and with it the integrity of sequential organization. Nearly all the technological interventions I have heard about --whether replacing an interactant, or inserted as a medium between interactants --impacts on this aspect of the exchange of talk. It is one reason for wondering whether retention of ordinary conversation as the target of this enterprise is appropriate. For some of the contemplated innovations, like computer conferencing, exchanges of letters may be a more appropriate past model to study, for there too more than one may "speak" at a time, long lapses may intervene between messages, sequential ordering may be puzzling (as in "Did the letters cross in the mail?") etc.3) Sequential organization has a direct bearing on an issue which must be of continuing concern to workers in this area --that of understanding and misunderstanding. It is the sequential (including temporal) organization of the talk which, in ordinary conversation, provides running evidence to participants that, and how, they have been understood. The devices by which troubles of understanding are addressed (what we call "repair," discussed for computers by Phil Hayes in a recent paper) --requests for repetition or clarification and the like -are only one part of the machinery which is at work. Regularly, in ordinary conversation, a speaker can detect from the produced-to-be-responsive next turn of another s/he has or has been, misunderstood, and can immediately intervene to set matters right. This is a major safeguard of "intersubjectivity," a retention of a sense that the "sa~ thing" is being understood as what is being spoken of. The requirements on interactants to make this work are substantial, but in ordinary conversation, much of the work is carried as a by-product of ordinary sequential organization. The anecodotes I have heard about misunderstandings going undetected for long stretches when computers are the medium, and leading to, or past, the verge of nastiness, suggest that these are real problems to be faced.In all the business of person-person interaction there operates what we call "recipient-design" --the design of the participation by each party by reference to the features (personal and idiosyncratic, or categorial) of the recipient or co-participant. The formal machineries of turn-taking, sequential organization, repair, etc. are always conditioned in their realization on particular occasions and moments by this consideration. I don't know how this enters into plans for computerized interactants, and it remains to be seen how it will enter into the participation of humans dealing with computers. Persons make all sorts of allowances for children, nonnative speakers, animals, the handicapped, etc. But there are other allowances they do not make, indeed that don't present themselves as allowances or allowables. What is involved here is a determination of where the robustness is and where the brittleness, in interacting with persons by computers, for in the areas of robustness it may be that many of the issues I've mentioned may be safely ignored; the people "will understand." Throughout these notes, we are at a very general tevel of discourse. The real pay-offs, however, will come from discussing specifics. 
For that, interaction will be needed, rather than position papers.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 536 | 0.007463 | null | null | null | null | null | null | null | null |
2e1a53ccd6336bcca97be6df46c70167dcf81c4e | 668597 | null | Real Reading Behavior | The most obvious observable activities that accompany reading are the eye fixations on various parts of the text. Our laboratory has now developed the technology for automatically measuring and recording the sequence and duration of eye fixations that readers make in a fairly natural reading situation. This paper reports on research in progress to use our observations of this real reading behavior to construct computational models of the cognitive processes involved in natural reading. | {
"name": [
"Thibadeau, Robert and",
"Just, Marcel and",
"Carpenter, Patricia"
],
"affiliation": [
null,
null,
null
]
} | null | null | 18th Annual Meeting of the Association for Computational Linguistics | 1980-06-01 | 5 | 6 | null | In the first part of this paper we consider some constraints placed on models of human language comprehension imposed by the eye fixation data. In the second part we propose a particular model whose processing time on each word of the text is proportional to human readers' fixation durations.1

1 This research was supported in part by grants from the Alfred P. Sloan Foundation, the National Institute of Education (G-79-0119) and the National Institute of Mental Health (MH-29617).

The reason that eye fixation data provide a rich base for a theoretical model of language processing is that readers' pauses on various words of a text are distinctly non-uniform. Some words are looked at very briefly, while others are gazed at for one or two seconds. The longer pauses are associated with a need for more computation [2]. The span of apprehension is relatively small, so that at a normal reading distance a reader cannot extract the meaning of words that are in peripheral vision [6]. This means that a person can read only what he looks at, and for scientific texts read normally by college students, this involves looking at almost every word. Furthermore, the longer pauses can occur immediately on the word that triggers the additional computation [4]. Thus it is possible to infer the degree of computational load at each point in the text.

The starting point for the computer model was the analysis of the eye fixations of 14 Carnegie-Mellon undergraduates reading 15 passages (each about 140 words long) taken from the science and technology sections of Newsweek and Time magazines (see the Appendix for a sample passage). The mean fixation durations on each word (or on larger, clause-like sectors) of the text were analyzed in a multiple regression analysis in which the independent variables were the structural properties of the texts that were believed to affect the fixation durations. The results showed that fixation durations were influenced by several levels of processing, such as the word level (longer, less frequent words take longer to encode and lexically access) and the text level (more important parts of the text, like topics or definitions, take longer to process than less important parts). This analysis generated a verbal description of a model of the reading process that is consistent with the observed fixation durations. The details of the data, analysis, and model are reported elsewhere [5].

Some of the most intriguing aspects of the eye-fixation data concern trends that we have failed to find. Trends within noun phrases and verb phrases seem notable by their absence. Most approaches to sentence comprehension suggest that when the head noun of a noun phrase is reached, a great deal of processing is necessary to aggregate the meanings of the various modifiers. But this is not the case. While determiners and some prepositions are looked at more briefly, adjectives, noun-classifiers, and head nouns receive approximately the same gaze durations. (These results assume that word length effects on gaze duration have been covaried out.) Verb phrases, with the exception of modals, show a similar flat distribution. It is also notable that verbs are not gazed at longer than nouns, as might be expected.
Such results pose an interesting problem for a system which not only recognizes words, but also provides for their interpretation.

Another interesting result is the failure to find any associations with length of sentences (a rough measure of their complexity) or ordinal word position within sentences (a rough measure of amount of processing). That is to say, whether or not word function, character-length or syllables, etc., are controlled, there are no systematic trends associated with ordinal word position or sentence length. There is an added gaze duration associated with punctuation marks. Periods add about 73 milliseconds, and other punctuation (including commas, quotes, etc.) add about 43 milliseconds each above what can be accounted for by character-length or other covariates.

The strategy for making sense of these and other similar observations is to develop a computational framework in which they can be understood. That framework must be capable of performing such diverse functions as word recognition, semantic and syntactic analysis, and text analysis. Furthermore, it must permit the ready interaction among processes implied by these functions.

The framework we have implemented to accomplish these ambitious goals is a production system fashioned closely after Anderson's ACT system [1]. Such a production system is composed of three parts: a collection of productions comprising knowledge about how to carry out processes, a declarative knowledge base against which those processes are carried out, and an interpreter which provides for the actual behavior of the productions.

A production written for such a system is a condition-action pair, conceptually an 'if-then' concept, where the condition is assessed against a dynamically changing declarative knowledge base. If a condition is assessed as true (or matched), the action of the production is taken to alter the knowledge base. Altering the knowledge base leads to further potential for a match, so the production system will naturally cycle from match to match until no further productions can be matched.

The sense in which processing is cotemporaneous is that all productions in memory are assessed for a match of their conditions before an action is taken, and then all productions whose conditions succeed take action before the match proceeds again. This cycling behavior provides a reference in establishing the basic synchrony of the system. The mapping from the behavior of the model to observed word gaze durations is on the basis of the number of match (or so-called recognition-act) cycles which the model requires to process each word.

The physical implementation of the model is equipped at present to handle a dependency analysis of sentences of the sort of complexity we find in our texts (see the Appendix). There is nothing new to this analysis, and so it is not presented here. The implementation also exhibits some elementary word recognition, in that, for a few words, it contains productions recognizing letter configurations and shape parameters. The experience is, however, that the conventions which we have introduced provide a thoroughly 'debugged' initial framework. It is to the details of that framework that we now turn.

Much of our initial effort in formulating such a parallel processing system has been concerned with making each processing cycle as efficient as possible with respect to the processing demands involved in reading to comprehend.
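The recognize-act cycling just described lends itself to a small illustration. The sketch below is not the authors' implementation (which was a production system in the ACT tradition); it is a hypothetical Python rendering in which the productions, facts, and words are invented, all matching productions fire within the same cycle, and the number of cycles to quiescence stands in for the predicted gaze duration.

```python
# Minimal sketch of a recognize-act cycle: all productions whose conditions
# match the current knowledge base fire together, and cycling repeats until
# no production adds anything new (quiescence).

def run_to_quiescence(productions, knowledge, max_cycles=50):
    """Count the recognize-act cycles needed before no production fires productively."""
    for cycle in range(1, max_cycles + 1):
        # Recognize: collect the action of every production whose condition holds.
        pending = [prod["action"] for prod in productions if prod["condition"](knowledge)]
        # Act: apply all collected actions only after the full match pass ("in parallel").
        new_facts = set()
        for action in pending:
            new_facts |= action(knowledge)
        if not new_facts - knowledge:      # quiescence: nothing new was asserted
            return cycle - 1
        knowledge |= new_facts
    return max_cycles

# Two invented productions: seeing "the" sets up an expectation for a noun-phrase
# head; an open expectation plus a seen noun consumes it.
productions = [
    {"condition": lambda kb: ("word", "the") in kb,
     "action":    lambda kb: {("expect", "np-head")}},
    {"condition": lambda kb: ("expect", "np-head") in kb and ("word", "flywheel") in kb,
     "action":    lambda kb: {("interpreted", "np")}},
]

kb = {("word", "the"), ("word", "flywheel")}
print(run_to_quiescence(productions, kb))   # -> 2 productive cycles before quiescence
```

Keeping each such cycle as productive as possible is the efficiency concern the passage above raises and the following paragraphs take up.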
To do this we allow that any number of productions can fire on a single cycle, each production contributing to the search for an interpretation of what is seen. Thus, for instance, the system may be actively working on a variety of processing tasks, and some may reach conclusion before others. The importance of concurrent processing is precisely that the reader may develop hypotheses in actively pursuing one processing avenue (such as syntax), and these hypotheses may influence other decisions (such as semantics) even before the former hypotheses are decided. Furthermore, hypotheses may be developed as expectations about words not yet seen, and these too should affect how those words are in fact seen. In effect, much of our initial effort has been in formulating how processes can interact in a collaborative effort to provide an interpretation.

Collaboration in single recognition-act cycles is possible with carefully thought out conventions about the representation of knowledge in the knowledge base. As in ACT, every knowledge base element in our model is assigned a real-number activation level, which in the present system is regarded as a confidence value of sorts. Unlike ACT, the activation levels in our model are permitted to be positive or negative in sign, with the interpretation that a negative sign indicates the element is believed to be untrue.

Coupled with this property of knowledge base elements are threshold properties associated with elements in the condition side of the productions. A threshold may be positive or negative, indicating a query about whether something is true or false with some confidence. As the system is used, there is a conventional threshold value above which knowledge is susceptible to being evaluated for inconsistency or contradiction, and below which knowledge is treated as hypothetical; in the examples below, this conventional threshold value is assumed. The condition elements can also include absence tests, so the system is capable of responding on the basis of the absence of an element at a desired confidence. Productions can also pick out knowledge that is only hypothetical using this device. But more importantly, confidence in a result represents a manner in which productions can collaborate.

The confidence values on knowledge base elements are manipulated using a special action called <SPEW>. Basically, this action takes the confidence in one knowledge-base element and adds a linearly weighted function of that confidence to other knowledge-base elements. If any such knowledge-base element is not, in fact, in the knowledge base, it will be added. The elements themselves can be regarded as propositions in a propositional network. Thus, one can view the function of productions as maintaining and constructing coherent fields of propositions about the text.

Network representations of knowledge provide a natural indexing scheme, but to be practical on a computer such an indexing scheme needs augmentation. The indexing scheme must do several things at once. It must discriminate among the same objects used in different contexts, and it must also help resolve the difficult problem of two or more productions trying to build, or comment upon, the same knowledge structure concurrently.
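Before the indexing scheme itself is described, the confidence machinery sketched above (signed activation levels, condition thresholds, absence tests, and a <SPEW>-style weighted transfer) can be made concrete. The element names, the 0.7 criterion, and the weights below are invented for exposition and are not values from the paper.

```python
# Illustrative sketch, not the authors' code: knowledge-base elements carry a
# signed confidence; conditions test them against positive or negative
# thresholds, possibly as absence tests; a SPEW-like action adds a weighted
# share of one element's confidence to other elements, creating them if absent.

CRITERION = 0.7   # assumed conventional threshold separating "believed" from "hypothetical"

kb = {("word12", "isa", "THE"): 0.9}       # fairly confident that word 12 is "the"

def holds(kb, element, threshold):
    """Positive threshold: believed at least this strongly; negative: disbelieved that strongly."""
    value = kb.get(element, 0.0)
    return value >= threshold if threshold >= 0 else value <= threshold

def absent(kb, element, threshold):
    """Absence test: the element does NOT reach the given confidence."""
    return not holds(kb, element, threshold)

def spew(kb, source, targets):
    """Add a linearly weighted share of the source element's confidence to each target element."""
    amount = kb.get(source, 0.0)
    for element, weight in targets:
        kb[element] = kb.get(element, 0.0) + weight * amount

# One production step: confidence that word 12 is a determiner is spewed onto the
# hypotheses that it needs a head and that the next word (word 13) may supply it.
if holds(kb, ("word12", "isa", "THE"), CRITERION):
    spew(kb, ("word12", "isa", "THE"),
         [(("word12", "has", "determiner-tail12"), 1.0),
          (("word-expectation12", "is", "word13"), 1.0)])

print(kb[("word-expectation12", "is", "word13")])     # 0.9
print(absent(kb, ("word12", "isa", "A"), CRITERION))  # True: no such belief yet
# A weight of -1.0 would withdraw confidence instead, which is how a later
# production can effectively delete or negate a hypothesis.
```

The flat element names used here ignore the token-indexing problem that the next passage addresses: in the model proper, tokens such as WORD12 are generated systematically so that independently firing productions converge on the same structure.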
To give something of the flavor of the indexing scheme we have chosen: where other natural language understanding systems may create a token JOHN24 for a type JOHN, the number 24 in the present system does not simply distinguish this 'John' from others, it also places him within a dimensional space. In the examples to follow, the token numbers are generated for the sequential gazes, 1 for the first and so on. An obvious use of such a scheme is that several productions may establish expectations regarding the next word. If some subset of the productions establish the same expectation, then without matching they will create the properly distinguished tokens for that expectation.

Consider one production written for this system. It might be paraphrased as: "If you see some particular word (say WORD12) is some particular determiner (say THE), then from the confidence you have that that word is that determiner, assign (arithmetic ADD) that much confidence to the ideas that that word a) needs to modify something (has a determiner-tail, DETERMINER-TAIL12), b) the modification itself has a word expectation (say WORD-EXPECTATION12), c) which is to be fulfilled by the next word seen (WORD13)."

The indexing scheme is manifest in the use of the functions <TOK> and <NEXTTOK>. It is important to be able to predict what a token will be, since in a parallel architecture several productions may be collaborating in building this expectation structure. Type-token and category membership searches are usually carried out within the interpreter itself. The exclamation point prefix on subelements, as in !WORD above, causes the matcher to perform an ISA search for candidate tokens which the decision requires. The matcher is itself dynamically altered with respect to ISA knowledge as new tokens are created, and by explicit ISA knowledge manipulation on the part of specialized productions. This has certain computational advantages in keeping the match process efficient.2 The use of very many tokens, as implied by the above example, is important if one wants to explore the coordination of different processes in a parallel architecture.

The next production would fire if the word following the determiner were an adjective. The number prefixes, as in "1WORD", are tokens local to the production that just serve to indicate that different knowledge-base tokens are sought, not what those tokens should be. This production says that if a word has a determiner tail expecting some word and that word has been observed to be an adjective, then bring the confidence at least to 0.0 that the word-expectation is the adjective, and have confidence that the word-expectation is the word following the adjective.

The <SPEW> action of this production makes use of a weighting scheme which serves to alter the control of processing. In this framework any knowledge base element can serve as both a bit of knowledge (a link) and as a control value. The -1 number causes the confidence in the source of the spew to be multiplied by -1 before it is added to the target (WORD-EXPECTATION :IS 1WORD). If this were the only production requesting this switch of confidence, the effect would be the effective deletion of this bit of knowledge from the knowledge base.
If other productions were also switching this confidence, the system would wind up being confident that this word-expectation association is indeed not the case (explicitly false).

The primary interest in formulating a model is in having as much 'processing' or decision-making as possible in a single recognition-act cycle. The general idea is that an average gaze duration of 250 milliseconds on a word represents few such cycles. The ability of the model to predict gaze duration, then, depends upon the sequential constraints holding among the collection of productions brought to the interpretation process. The 'determiner tail' productions illustrated above represent a processing sequence in most contexts; the second cannot fire until the first has deposited its contribution in the knowledge base. This is not a necessary feature of these two productions, since other productions can collaborate to cause the simultaneous matching of the two productions illustrated (we assume these are easy to imagine). However, one may note that since the 'determiner tail' productions are distributed over several word gazes, they at most contribute one processing cycle to the gaze on any word (besides the determiner). Thus, sequencing over words may not be expensive. Let us consider where it is computationally expensive.

In contrast to rightward looking activities, the presence of strong sequencing constraints among productions is potentially costly in leftward looking activities. To illustrate how such costs might be reduced, consider a production with a fairly low threshold which assigns a need to find an agent for an action-process verb, and another production which says that if one has an animate noun preceding an action-process verb and that animate noun is the only possible candidate, then that animate noun is the agent. These two productions are likely to fire simultaneously if the latter one fires at all. They both create a need to find an agent and satisfy that need at once. They do not set word expectations simply because the look-back at previous text tries to be efficient with regard to sequencing constraints. Had the need not been immediately fulfilled, it would serve as a promotion of other productions which might find other ways of fulfilling it, or of reinterpreting the use of the action-process verb (even questioning the ISA inference). It should be noted that the natural device for keeping these further productions in sequence from firing is having them make the absence test, as in a production whose action is to suggest that this might be an imperative, a passive, an ellipsis, etc. The interpretation of the production is that "if you know with confidence that you have an action-process-verb and it needs an agent, but you don't know what that agent is, then suggest various reasons why you might not know, with appropriately low confidence in them."

2 The matcher is a slightly altered form of the RETE Matcher written by Forgy for OPS4 [3].

The basic method of coordinating eye and mind in the present model is to make getting the next word contingent upon having completed the processing on the present one. In a production system architecture, this simply means that the match fails to turn up any productions whose conditions match to the knowledge base.
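A minimal sketch of this eye-mind contingency, assuming invented toy productions and facts, is given below: the model keeps cycling on the current word until no production fires, records the cycle count as its relative gaze-duration prediction, and only then takes up the next word. The specific words and the "important" marking are hypothetical, not from the paper's passages.

```python
# Sketch of eye-mind coordination (illustrative only): one gaze at a time, run
# recognize-act cycles to quiescence, record the cycle count, then move on.

def cycles_for_word(word, facts, productions, max_cycles=20):
    """Run recognize-act cycles for one gaze; return (cycle count, updated facts)."""
    facts = set(facts) | {("seen", word)}
    for cycle in range(1, max_cycles + 1):
        fired = [p for p in productions if p["condition"](facts, word)]
        new = set()
        for p in fired:
            new |= p["action"](facts, word)
        if not new - facts:                # quiescence: the eye is free to move on
            return cycle - 1, facts
        facts |= new
    return max_cycles, facts

# Toy productions: every seen word must be encoded; a word hypothetically marked
# as important triggers one extra cycle of integration work, hence a longer gaze.
productions = [
    {"condition": lambda f, w: ("seen", w) in f and ("encoded", w) not in f,
     "action":    lambda f, w: {("encoded", w)}},
    {"condition": lambda f, w: ("encoded", w) in f and ("important", w) in f
                               and ("integrated", w) not in f,
     "action":    lambda f, w: {("integrated", w)}},
]

facts = {("important", "flywheels")}       # assumed text-level judgement, for illustration
for word in ["flywheels", "are", "one"]:
    n, facts = cycles_for_word(word, facts, productions)
    print(word, n)    # "flywheels" takes one cycle more than the other words
```

The `not in` tests in the toy conditions play the role of the absence tests discussed next: once a word is encoded or integrated, the corresponding production can no longer fire, which is what lets the match run dry and the gaze end.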
Since elements in the knowledge base specify the need-to-know as well as what is known, the use of absence tests in the conditions of productions can 'shut off' further processing when it is deemed to be completed, or simply deemed to be unnecessary. It is by this device that the system demonstrates more processing on important information, 'shutting off' extended processing on that which is deemed, for any number of reasons, as less important.

The model must, in addition to various ideas about coordination, also be capable of representing various ideas about dis-coordination. One potential instance of this in the present data is that while virtually every word is fixated upon at least once (recall that several fixations can count toward a single gaze), there are some words, AND, OR, BUT, A, THE, TO, and OF, with some likelihood of not being gazed upon at all (this accounts in some part for the fairly low average gaze duration on these words). This can be considered a dis-coordination of sorts, since to be this selective the reader must have some reasonably strong hypotheses about the words in question (the knowledge sources for these hypotheses are potentially quite numerous, including the possibility of knowledge from peripheral vision).

A production to implement this dis-coordination in the present system detects the presence of one of the above function words, and immediately shifts the present goal of interpreting a word (if it happens to be that) to gazing upon the word following the function word. It is important to recognize that the eye need not be on the function word for the system to know with reasonable confidence that the next word is a function word. The indexing scheme permits the system to form hypotheses strong enough to create effective reality (e.g., peripheral information and expectations can add up to the conclusion that the word is a function word). A second important property is that the system does not get confused with such skips, or in the usual case with such brief stays on these words. The reason again is that each word becomes a sort of local demon, inheriting demon-like properties from general productions and by interaction with other knowledge base elements through the system of productions.

This report has provided a brief description of work in progress to capture our observations of reading eye-movements in computational models of the reading process. We have illustrated some of the main properties of reading eye-movements and some of the main issues to arise. We have also illustrated within an implemented system how these issues might be addressed and explored in order to gain insight into more precise queries about real reading behavior.

An example text:

Flywheels are one of the oldest mechanical devices known to man. Every internal-combustion engine contains a small flywheel that converts the jerky motion of the piston into the smooth flow of energy that powers the drive shaft. The greater the mass of a flywheel and the faster it spins, the more energy can be stored in it. But its maximum spinning speed is limited by the strength of the material it is made from. If it spins too fast for its mass, any flywheel will fly apart. One type of flywheel consists of round sandwiches of fiberglas and rubber, providing the maximum possible storage of energy when the wheel is confined in a small space, as in an automobile. Another type, the "superflywheel", consists of a series of rimless spokes.
This flywheel stores the maximum energy when space is unlimited. | null | null | null | null | Main paper:
:
In the first part of this paper we consider some constraints placed on models of human language comprehension imposed by the eye fixation data. In the second part we propose a particular model whose processing time on each word of the text is proportional to human readers' fixation durations.tThe reason that eye fixation data provide a rich base for a theoretical model of language processing is that readers' pauses on various words of a text are distinctly non-uniform. Some words are looked at very briefly, while others are gazed at for one or two seconds. The longer pauses are associated with a need for more computation [2] . The span of apprehension is relatively small, so that at a normal reading distance a reader cannot extract the meaning of words that are in peripheral vision [6] . This means that a person can read only what he looks at, and for scientific texts read normally by college students, this involves looking at almost every word. Furthermore, the longer pauses can occur immediately on the word that triggers the additional computation [4] . Thus it is possible to infer the degree of computational load at each point in the text.The starting point for the computer model was the analysis of the eye fixations of 14 Carnegie-Mellon undergraduates reading 15 passages (each about 140 words long) taken from the science and technology sections of Newsweek and Time magazines (see the Appendix for a sample passage). The mean fixation duration on each word (or on larger, clause-like sectors) of the text were analyzed in a multiple regression analysis in which the independent variables were the structural prcperties of the texts that were believed to affect the fixation durations. The results showed that fixation durations were influenced by several levels of processing, such as the word level (longer, less frequent 1This research was supported in part by grants from the Alfred P. Sloan Foundation. the National Institute of Education (G-79-0119) and the National institute of Mental Health (MH-29617) words take longer to encode and lexically access), and the text level (more important parts of the text, like topics or definitions take longer to process than less important parts). This analysis generated a verbal description of a model of the reading process that is consistent with the observed fixation durations. The details of the data, analysis, and model are reported elsewhere [5] .Some of the most intriguing aspects of the eye-fixation data concern trends that we have failed to find. Trends within noun phrases and verb phrases seem notable by their absence. Most approaches to sentence comprehension suggest that when the head noun of a noun phrase is reached, a great deal of processing is necessary to aggregate the meanings of the various modifiers. But this is not the case. While determiners and some prepositions are looked at more briefly, adjectives, noun-classifiers, and head nouns receive approximately the same gaze durations. (These results assume that word length effects on gaze duration have been covaried out). Verb phrases, with the exception of modals, show a similar flat distribution. It is also notable that verbs are not gazed at longer than nouns, as might be expected. 
Such results pose an interesting problem for a system which not only recognizes words, but also provides for their interpretation.Anotl"ler interesting result is the failure to find any associations with length of sentences (a rough measure of their complexity) or ordinal word position within sentences (a rough measure of amount of processing). That is to say, whether or not word function, character-length or syllables, etc., are controlled, there are no systematic trends associated with ordinal word position or sentence length. There is an added gaze duration associated with punctuation marks. Periods add about 73 milliseconds, and other punctuation (including commas, quotes, etc.) add about 43 milliseconds each above what can be accounted for by character-length or other covariates.The strategy for making sense of these and other similar observations is to develop a computational framework in which they can be understood. That framework must be capable of performing such diverse functions as word recognition, semantic and syntactic analysis, and text analysis. Furthermore, it must permit the ready interaction among processes implied by these functions.The framework we have implemented to accomplish these ambitious goals is a production system fashioned closely after Anderson's ACT system [1] . Such a production system is composed of three parts, a collection of productions comprising knowledge about how to carry out processes, a declarative knowledge base against which those processes are carried out, and an interpreter which provides for the actual behavior of the productions.A production written for such a system is a condition-action pair, conceptually an 'if-then' concept, where the condition is assessed against a dynamically changing declarative know~edge base. If a condition is assessed as true (or matcheLl), the action of the production is taken to alter the knowJedge base. Altering the knowledge base leads to further potential for a match, so the production system will naturally cycle from match to match until no further productions can be matched.The sense in which processing is ¢otemporaneous is that all productions in memory are assessed for a match of their conditions before an action is taken, and then all productions whose. conditions succeed take action before the match proceeds again.This cycling, behavior provides a reference in establishing the basic synchrony of the system. The mapping from the behavior of the model to observed word gaze durations is on the basis of the number of match (or so-called recognition.act) cycles which the model requires to process each word.The physical implementation of the model is equipped at present to handle a dependency analysis of sentences of the sort of complexity we find in our texts (see the Appendix). There is nothing new to this analysis, and so it is not presented here. The implementation also exihibits some elementary word recognition, in that, for a few words, it contains productions recognizing letter configurations and shape parameters. The experience is, however, that the conventions which we have introduced provide a thoroughly 'debugged' initial framework. It is to the details of that framework that we now turn.Much of our initial effort in formulating such a parallel processing system has been concerned with making each processing cycle as efficient as possible with respect to the processing demands involved in reading to comprehend. 
To do this we allow that any number of productions can fire on e single cycle, each production contributing to the search for an interpretation of what is seen. Thus, for instance, the system may be actively working on a variety of processing tasks, and some may reach conclusion before others. The importance of concurrent processing is precisely that the reader may develop htPotheses in actively pursuing one processing avenue (such as syntax), and these hypotheses may influence other decisions (such as semantics) even before the former hypotheses are decided. Furthermore, hypotheses may be developed as expectations about words not yet seen, and these too should affect how those words are in fact seen. In effect, much of our initial effort has been in formulating how processes can interact in a collaborative effort to provide an interpretation.Collaboration in single recognition-act cycles is possible with carefully thought out conventions about the representation of knowledge in the knowledge base. As in ACT, every knowledge base element in our model is assigned a real.number activation level, which in the present system is regard d as a confidence value of sorts. Unlike ACT, the activation levels in our model are permitted to be positive or negative in sign, with the interpretation that a negative sign indicates the element is believed to be untrue.Coupled with this property of knowledge base elements are threshold properties associated with elements in the condition side of the productions. A threshold may be positive or negative, indicating a query about whether something is true or false with some confidence. As the system is used, there is a conventional threshold value above which knowledge is susceptible to being evaluated for inconsistency or contradiction, and below which knowledge is treated as hypothetical, in the examples below, this conventional threshold value is assumed. The condition elements can also include absence tests, so the system is capable of responding on the basis of the absence of an element at a desired confidence. Productions can also pick out knowledge that is only hypothetical using this device. But more importantly confidence in a result represents a manner in which productions can collaborate.The confidence values on knowledge base elements are manipulated using a special action called <SPEW>. Basically, this action takes the confidence in one knowledge-base element and adds a linearly weighted function of that confidence to other knowledge.base elements, If any such knowledge-base element is not, in fact, in the knowledge base, it will be added. The elements themselves can be regarded as propositions in a propositional network. Thus, one can view the function of productions as maintaining and constructing coherent fields of propositions about the text.Network representations of knowledge provide a natural indexing scheme, but to be practical on a computer such an indexing scheme needs augmentation.The indexing scheme must do several things at once. It must discriminate among the same objects used in different contexts, and it must also help resolve the difficult problem of two or more productions trying to build, or comment upon, the same knowledge structure concurrently. 
To give something of the flavor of the indexing scheme we have chosen: where other natural language understanding systems may create a token JOHN24 for a type JOHN, the number 24 in the present system does not simply distinquish this 'John' from others, it also places him within a dimensional space. In the exarnpies to follow the token numbers are generated for the sequential gazes, 1 for the first and so on. An obvious use of such a scheme is that several productions may establish expectations regarding the next word. If some subset of the productions establish the same expectation, then without matching they will create the properly distinguished tokens for that expectation.Consider one production written for this system:This production might be paraphrased as "lf you see some particular word (say WORD12) is some particular determiner (say THE), then from the confidence you have that that word is that determiner, assign (arithmetic ADD) that much confidence to the ideas that that word a) needs to modify something (has a determiner-tail, DETERMINER-TAIL12), b) the modification itself has a word expectation (say WORD-EXPECTATION12), c) which is to be fulfilled by the next word seen (WORD13).The indexing scheme is manifest in the use of the functions <TOK> and <NEXTTOIC,.It is important to be able to predict what a token will be, since in a parallel architecture several productions may be collaborating in building this expectation structure.Type-token and category membership searches are usually carried out within the interpreter itself. The exclamation point prefix on subelements, as in !WORD above, causes the matcher to perform an ISA search for candidate tokens which the decision The matcher is itself dynamically altered with respect to ISA knowledge as new tokens are created, and by explicit ISA knowledge manipulation on the part of specialized productions. This has certain computational advantages in keeping the match process efficient 2. The use of very many tokens, as implied by the above example, is important if one wants to explore the coordination of different processes in a parallel architecture.The next production would fire if the word following the determiner were an adjective:-->The number prefixes, as in "1WORD", are tokens local to the production that just serve to indicate different knowledge base tokens are sought not what their knowledge base tokens should be. This production says that if a word has a determiner tail expecting some word and that word has been observed to be an adjective, then bring the confidence at least to 0.0 that the word-expectation is the adjective, and have confidence that the word-expectation is the word following the adjective.The <SPEW> action of this production makes use of a weighting scheme which serves to alter the control of processing. In this framework any knowledge base element can serve as both a bit of knowledge (a link) and as a control value. The .1 number causes the confidence in the source of the spew to be multiplied by -1 before it is added to the target, (WORD-EXPECTATION :IS 1WORD). If this were the only production requesting this switch of confidence, the effect would be the effective deletion of this bit of knowledge from the knowledge base. 
If other productions were also switching this confidence, the system would wind up being confident that this word-expectation association is indeed not the case (explicitly false).The primary interest in formulating a model is in having as much 'processing' or decision-making as possible in a single recognition-act cycle. The general idea is that an average gaze duration of 250 milliseconds on a word represents few such cycles. The ability of the model to predict gaze duration, then, depends upon the sequential constraints holding among the collection of productions brought to the interpretation process. The 'determiner tail' productions illustrated above represent a processing sequence in most contexts; the second cannot fire until the first has deposited its contribution in the knowledge base. This is not a necessary feature of these two productions, since other productions can collaborate to cause the simultaneous matching of the two productions illustrated (we assume these are easy to imagine). However, one may note that since the 'determiner tail' productions are distributed over several word gazes, they at most contribute one processing cycle to the gaze on any word (besides the determiner). Thus, sequencing over words may not be expensive. Let us consider where it is computationally expensive.In contrast to rvghtward looking activities, the presence of strong sequencing constraints among productions is potentially costly in leftward looking activities. To illustrate how such costs might be reduced, consider a production with a fairly low threshold which assigns a need to find an agent for an action-process verb, and another production which says that if one has an animate noun preceding an action-process verb and that animate noun is the only possible candidate, then that animate noun is the agent. These two productions are likely to fire simultaneously if the latter one fires at all. They both create a need to find an agent and satisfy that need at once. They do not set word • expectations simply because the look-back at previous text tries to be efficient with regard to sequencing constraints. Had the need not been immediately fulfilled, it would serve as a promotion of other productions which might find other ways of fulfilling it, or of reinterpreting the use of the action-process verb (even questioning the ISA inference). It should be noted that the natural device for keeping these further productions in sequence from firing is having them make the absence test, as in--> ...suggest this might be an imperative, passive, el] ipse, etc.)The interpretation of the production is that "if you know with confidence that you have an action-process-verb and it needs an agent, but you don't know what that agent is, then suggest various reasons why you might not know with appropriately low confidence in them."2The matcher is a slightly altered form of the RETE Matcher written by Forgy for OPS4 [3] .The basic method of coordinating eye and mind in the present model is to make getting the next word contingent upon having completed the processing on the present one. In a production system architecture, this simply means that the match fails to turn up any productions whose conditions match to the knowledge base. 
Since elements in the knowledge base specify the need-to-know as wel: as what is known, the use of absence tests in the conditions of productions can 'shut off' further processing when it is deemed to be completed, or simply deemed to be unnecessary.It is by this device that the system demonstrates more processing on important information, 'shutting off' extended processing on that which is deemed, for any number of reasons, as less important.The model must, in addition to various ideas about coordination, be also capable of representing various ideas about dis-coordination. One potential instance of this in the present data is that while virtually every word is fixated upon at least once (recall that several fixations can count toward a single gaze), there are some words, AND, OR, BUT, A, THE, TO, and OF, with some likelihood of not being gazed upon at all (this accounts in some part for the fairly low average gaze duration on these words). This can be considered a dis-coordination of sorts, since to be this selective the reader must have some reasonable strong hypotheses about the words in question (the knowledge sources for these hypOtheses are potentially quite numerous, including the possibility of knowledge from peripheral vision).A production to implement this dis-coordination in the present system is: This production detects the presence of one of the above function words, and immediately shifts the present goal of interpreting a word (if it happens to be that) to gazing upon the word following the function word. It is important to recognize that the eye need not be on the function word for the system to know with reasonable confidence that the next word is a function word. The indexing scheme permits the system to form hypotheses strong enough to create effective reality (e.g., peripheral information and expectations can add up to the conclusion that the word is a function word). A second important property is that the system does not get confused with such skips, or in the usual case with such brief stays on these words. The reason again is because each word becomes a sort of local demon inheriting demon-like properties from general production, and by interaction with other knowledge base elements through the system of productions.This report has provided a brief description on work in progress to capture our observations of reading eye-movements in computational models of the reading process. We have illustrated some of the main properties of reading eye-movements and some of the main issues to arise. We have also illustrated within an implemented system how these issues might be addressed and explored in order to gain insight into more precise queries about real reading behavior.An example text:Flywheels are one of the oldest mechanical devices known to man. Every internal-combustion engine contains a small flywheel that converts the jerky motion of the piston into the smooth flow of energy that powers the drive shaft. The greater the mass of a flywheel and the faster it spins, the more energy can be stored in it. But its maximum spinning speed is limited by the strength of the material it is made from. If it spins too fast for its mass, any flywheel will fly apart. One type of flywheel consists of round sandwiches of fiberglas and rubber providing the maximum possible storage of energy when the wheel is confined in a small space as in an automobile.Another type, the "superflywheel", consists of a series of rimless spokes. 
This flywheel stores the maximum energy when space is unlimited.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 536 | 0.011194 | null | null | null | null | null | null | null | null |
7332de8c57cd29a2c44cbde493f691974545a8f0 | 597284 | null | Paralanguage in Computer Mediated Communication | This paper reports on some of the components of person to person communication mediated by computer conferencing systems. Transcripts from two systems were analysed: the Electronic Information and Exchange System (EIES), based at the | {
"name": [
"Carey, John"
],
"affiliation": [
null
]
} | null | null | 18th Annual Meeting of the Association for Computational Linguistics | 1980-06-01 | 5 | 98 | null | The following elements have been isolated within the transcripts and given a preliminary designation as paralinguistic features.These features include non standard spellings of words which bring attention to sound qualities.The spelling may serve to mark a regional accent or an idiosyncratic manner of speech.Often, the misspelling involves repetition of a vowel (drawl) or a final consonant (released or held consonant, with final stress).In addition, there are many examples of non standard contractions.A single contraction in a message appears to bring attention (stress) to the word.A series of contractions in a single message appears to serve as a tempo marker, indicating a quick pace in composing the message./biznls/ /weeeeell/ /breakkk/ 2. THE FRAMEComputer conferencing may be described as a frame of social activity in Goffman's terms (1974) .The computer conferencing frame is characterized by an exchange of print communication between or among individuals. That is, it may involve person to person or person to group communication.The information is typed on a computer terminal, transmitted via a telephone line to a central computer where it is processed and stored until the intended receiver (also using a computer terminal and a telephone llne) enters the system.The received information is either printed on paper or displayed on a television screen.The exchange can be in real time, if the users are on the system simultaneously and linked together in a common notepad.More typically, the exchange is asynchronous with several hours or a few days lapse between sending and receiving.In all of the transcripts examined for this study, the composer of the message typed it into the system. Further, the systems were used for many purposes:/y'all/ /Miami Dade Cmt7 Coll Life Lab Pgm/ Figure i. Examples of Vocal SpellingSoma of the spellings shown above can occur through a glitch in the system or an unintended error by the composer of the message.Typically, the full context helps the reader to discern if the spelling was intentional.Often, people use words to describe their "tone of voice" in the message.This may be inserted as a parenthetical comment within a sentence, in which case it is likely to mark that sentence alone.Alternatively, it may be located at the beginning or end of a message.In these instances, it often provides a tone for the entire message.1. The research was supported by DHEW Grant No. 54-P-71362/2/2-01In addition, vocal segregates (e.g. uh huh, hmmm, yuk yuk) are written commonly within the body of texts./What was decided?I like the idea, but then again, it was mine Oshe said blushingly)./ /Boo, boo Horror of horrors! ti65 DOESN'T seem to cure all the problems involved in transmitting files./ While some users borrow a standard letter format, others treat the page space as a canvass on which they paint wi~h words and letters, or an advertisement layout in which they are free to leave space between words, skip lines, and paragraph each new sentence.Some spatial arrays are actual graphics: arrangements of letters to create a picture. 
Hiltz and Turoff (1978) note the heavy use of graphics at Christmas time, when people send greeting cards through the conferencing system. In day to day messaging, users often leave space between words (indicating pause, or setting off a word or phrase), run words together (quickening of tempo, onomatopoeic effect), skip lines within a paragraph (to set off a word, phrase or sentence), and create paragraphs to lend visual support to the entire message or items within it. In addition, many messages contain headlines, as in newspaper writing.

Grammatical markers such as capitalization, periods, commas, quotation marks, and parentheses are manipulated by users to add stress, indicate pause, modify the tone of a lexical item and signal a change of voice by the composer. For example, a user will employ three exclamation marks at the end of a sentence to lend intensity to his point. A word in the middle of a sentence (or one sentence in a message) will be capitalized and thereby receive stress. A series of dashes between syllables of a word can serve to hold the preceding syllable and indicate stress upon it or the succeeding syllable. Parentheses and quotation marks are used commonly to indicate that the words contained within them are to be heard with a different tone than the rest of the message. A series of periods are used to indicate pause, as well as to indicate internal and terminal junctures. For example, in some messages, composers do not use commas. At points where a comma is appropriate, three periods are employed. At the end of the sentence, several periods (the number can vary from 4 to more than 20) are used. This system indicates to the reader both the grammatical boundary and the length of pause between words.

The Electronic Information and Exchange System employs some of these grammatical marker manipulations in the interface between user and system. For example, they instruct a user to respond with question marks when he does not know what to do at a command point. One question mark indicates "I don't understand what EIES wants here," and will yield a brief explanation from the system. Two question marks indicate "I am very confused" and yield a longer explanation. Three question marks indicate "I am totally lost" and put the user in direct touch with the system monitor.

The absence of certain features or expected work in composition may also lend a tone to the message. For example, a user may not correct spelling errors or glitches introduced by the system. Similarly, he may pay no attention to paragraphing or capitalization. The absence of such features, particularly if they are clustered together in a single message, can convey a relaxed tone of familiarity with the receiver or quickness of pacing (e.g. when the sender has a lot of work to do and must compose the message quickly).
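The paper describes these markers observationally rather than algorithmically, but some of them are the kind of thing one could flag mechanically. The sketch below is purely illustrative: the regular expressions and category labels are invented here, not taken from the paper, and no attempt is made at the glitch-versus-intention judgement discussed below.

```python
import re

# Hypothetical detector for a few of the marker types catalogued above
# (vocal spelling via letter repetition, repeated punctuation, contrastive
# capitalization). Patterns and labels are illustrative inventions.

MARKERS = {
    "vocal spelling (letter repetition)": re.compile(r"\b\w*(\w)\1{2,}\w*\b"),
    "repeated punctuation":               re.compile(r"[!?.]{3,}"),
    "contrastive capitalization":         re.compile(r"\b[A-Z]{2,}\b"),
}

def mark_up(message):
    """Return the marker types found in a message, with the matching substrings."""
    found = {}
    for label, pattern in MARKERS.items():
        hits = [m.group(0) for m in pattern.finditer(message)]
        if hits:
            found[label] = hits
    return found

msg = "Weeeeell... ti65 DOESN'T seem to cure all the problems!!!"
for label, hits in mark_up(msg).items():
    print(label, "->", hits)
```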
These phenomena are the voice qualities and tones which communicate expressive feelings, indicate the age, health and sex of a speaker, modify the meanings of words, and help to regulate interaction between speakers. Paralanguage becomes an issue in print communication when individuals attempt to transcribe (and analyse) an oral presentation, or write a script which is to be delivered orally. In addition, paralinguistic analysis can be directed towards forms of print which mimic or contain elements of oral communication. These include comic strips, novels, graffiti, and computer conferencing (see Crystal and Davy 1969).

The research reported here is not concerned with a direct comparison between face-to-face and computer mediated communication. Such a comparison is useful, e.g. it can help us to understand how one form borrows elements from the other (see section 5.), or aid in the selection of the medium which is more appropriate for a given task. However, the intent here is simpler: to isolate some of the paralinguistic features which are present in computer mediated communication and to begin to map the patterning of those features.

simple message sending (electronic mail), task related conferencing, and fun (e.g. jokes and conferences on popular topics). Bills for usage were paid by the organizations involved, not the individuals themselves. These elements within the frame may affect the style of interaction. One concern in frame analysis is to understand differences in a situation which make a difference. Clearly, there is a need to investigate conditions not included in this study in order to gain a broader understanding of paralinguistic usage. Among the conditions which might make a difference are: the presence of a secretary in the flow of information; usage based upon narrow task communications only; and situations where there is a direct cost to the user.

It can be noted, first, that some features mark a short syllabic or polysyllabic segment (e.g. capitalization, contraction, and vocal segregates), while others mark full sentences or the entire message (e.g. a series of exclamation points, letter graphics, or an initial parenthetical comment). Second, it is revealing that many of these features have an analogic structure: in some manner, they are like the tone they represent.
For example, a user may employ more or fewer periods, more or fewer question marks, to indicate degrees of pause or degrees of perplexity. Paralanguage in everyday conversation is highly analogic and represents feelings, moods and states of health which do not (apparently) lend themselves to the digital structure of words.

Paralinguistic features in computer conferencing occur, often, at points of change in a message: change of pace, change of topic, change of tone. In addition, many of the features rely upon a contrastive structure to communicate meaning. That is, a message which is typed in all caps does not communicate greater intensity or stress. Capitalization must occur contrastively over one or two words in an otherwise normal sentence or over one or two sentences in a message which contains some normal capitalization.

Most paralinguistic features can have more than one meaning. Reviewed in isolation, a feature might indicate a relaxed tone, an intimate relation with the receiver, or simply sloppiness in composition. Readers must rely upon the surrounding context (both words and other paralinguistic features) to narrow the range of possible meanings. The intended receiver of a message, as well as an outsider who attempts to analyse transcripts, must cope with the interpretation of paralinguistic features. Initially, the reader must distinguish glitches in the system and unintended typing errors from intentional use of repetition, spacing, etc. Subsequently, the reader must examine the immediate context of the feature and compare the usage with similar patterns in the same message, in other messages by the composer, and/or in other messages by the general population of users.

The findings presented in this study are taken from a limited set of contexts. For this reason, they must be regarded as a first approximation of paralinguistic code structure in computer conferencing. Moreover, the findings do not suggest that a clear code exists for the community of users. Rather, the code appears to be in a stage of development and learning.

The study has helped to define some differences among users which appear to make a difference in the paralinguistic features they employ. In the corpus of transcripts examined, usage varied between new and experienced participants, as well as between infrequent and frequent participants. Generally, experienced and frequent participants employed more paralinguistic features. However, idiosyncratic patterns appear to be more important in determining usage. The findings serve more to define questions for subsequent study than to provide answers about user variations.

In addition, it is clear that the characteristics of the computer terminals (TI 745s, primarily), as well as system characteristics, provided many of the components or "bricks" with which paralinguistic features were constructed. For example, the repeat key on the terminal allowed users to create certain forms of graphics. Also, star keys, dollar signs, colons and other available keys were employed to communicate paralinguistic information. System terms to describe a mode of operation (e.g. notepad, scratchpad, message, conference) may also influence development of a code of usage by suggesting a more formal or informal exchange.

Finally, it may be noted that early in their usage, some participants appeared to borrow formats from other media with which they were familiar (e.g. business letters, telegrams, and telephone conversations). Over time, patterns of usage converged somewhat.
However, idiosyncratic variation remained strong.

A few conclusions can be drawn from this study. First, the presence of paralinguistic features in computer conferencing, and the effort by users to communicate more information than can be carried by the words themselves, suggest that people feel it is important to be able to communicate tonal and expressive information. Second, it is not easy to communicate this information. Users must work in computer conferencing to communicate information about their feelings and state of health which naturally accompanies speech. While there does not appear to be a unified and identifiable code of paralinguistic features within conferencing systems or among users of the systems, the collective behavior of participants may be creating one.
features:
The following elements have been isolated within the transcripts and given a preliminary designation as paralinguistic features.These features include non standard spellings of words which bring attention to sound qualities.The spelling may serve to mark a regional accent or an idiosyncratic manner of speech.Often, the misspelling involves repetition of a vowel (drawl) or a final consonant (released or held consonant, with final stress).In addition, there are many examples of non standard contractions.A single contraction in a message appears to bring attention (stress) to the word.A series of contractions in a single message appears to serve as a tempo marker, indicating a quick pace in composing the message./biznls/ /weeeeell/ /breakkk/ 2. THE FRAMEComputer conferencing may be described as a frame of social activity in Goffman's terms (1974) .The computer conferencing frame is characterized by an exchange of print communication between or among individuals. That is, it may involve person to person or person to group communication.The information is typed on a computer terminal, transmitted via a telephone line to a central computer where it is processed and stored until the intended receiver (also using a computer terminal and a telephone llne) enters the system.The received information is either printed on paper or displayed on a television screen.The exchange can be in real time, if the users are on the system simultaneously and linked together in a common notepad.More typically, the exchange is asynchronous with several hours or a few days lapse between sending and receiving.In all of the transcripts examined for this study, the composer of the message typed it into the system. Further, the systems were used for many purposes:/y'all/ /Miami Dade Cmt7 Coll Life Lab Pgm/ Figure i. Examples of Vocal SpellingSoma of the spellings shown above can occur through a glitch in the system or an unintended error by the composer of the message.Typically, the full context helps the reader to discern if the spelling was intentional.Often, people use words to describe their "tone of voice" in the message.This may be inserted as a parenthetical comment within a sentence, in which case it is likely to mark that sentence alone.Alternatively, it may be located at the beginning or end of a message.In these instances, it often provides a tone for the entire message.1. The research was supported by DHEW Grant No. 54-P-71362/2/2-01In addition, vocal segregates (e.g. uh huh, hmmm, yuk yuk) are written commonly within the body of texts./What was decided?I like the idea, but then again, it was mine Oshe said blushingly)./ /Boo, boo Horror of horrors! ti65 DOESN'T seem to cure all the problems involved in transmitting files./ While some users borrow a standard letter format, others treat the page space as a canvass on which they paint wi~h words and letters, or an advertisement layout in which they are free to leave space between words, skip lines, and paragraph each new sentence.Some spatial arrays are actual graphics: arrangements of letters to create a picture. 
Hiltz and Turoff (1978) note the heavy use of graphics at Christmas time, when people send greeting cards through the conferencing system. In day to day messaging, users often leave space between words (indicating pause, or setting off a word or phrase), run words together (quickening of tempo, onomatopoeic effect), skip lines within a paragraph (to set off a word, phrase or sentence), and create paragraphs to lend visual support to the entire message or items within it. In addition, many messages contain headlines, as in newspaper writing.

Grammatical markers such as capitalization, periods, commas, quotation marks, and parentheses are manipulated by users to add stress, indicate pause, modify the tone of a lexical item and signal a change of voice by the composer. For example, a user will employ three exclamation marks at the end of a sentence to lend intensity to his point. A word in the middle of a sentence (or one sentence in a message) will be capitalized and thereby receive stress. A series of dashes between syllables of a word can serve to hold the preceding syllable and indicate stress upon it or the succeeding syllable. Parentheses and quotation marks are used commonly to indicate that the words contained within them are to be heard with a different tone than the rest of the message. A series of periods is used to indicate pause, as well as to indicate internal and terminal junctures. For example, in some messages, composers do not use commas. At points where a comma is appropriate, three periods are employed. At the end of the sentence, several periods (the number can vary from 4 to more than 20) are used. This system indicates to the reader both the grammatical boundary and the length of pause between words.

The Electronic Information and Exchange System employs some of these grammatical marker manipulations in the interface between user and system. For example, they instruct a user to respond with question marks when he does not know what to do at a command point. One question mark indicates "I don't understand what EIES wants here," and will yield a brief explanation from the system. Two question marks indicate "I am very confused" and yield a longer explanation. Three question marks indicate "I am totally lost" and put the user in direct touch with the system monitor.

The absence of certain features or expected work in composition may also lend a tone to the message. For example, a user may not correct spelling errors or glitches introduced by the system. Similarly, he may pay no attention to paragraphing or capitalization. The absence of such features, particularly if they are clustered together in a single message, can convey a relaxed tone of familiarity with the receiver or quickness of pacing (e.g. when the sender has a lot of work to do and must compose the message quickly).
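As a rough illustration of how these punctuation manipulations could be tallied automatically (this is not a coding scheme from the study; the cue names and patterns are assumptions):

```python
import re

# Rough counters for the punctuation manipulations described above.
EXCLAIM_RUN = re.compile(r"!{2,}")             # "!!!" added intensity
QUESTION_RUN = re.compile(r"\?{2,}")           # "??" / "???" degrees of confusion
PERIOD_RUN = re.compile(r"\.{3,}")             # "...." pause / juncture marker
DASH_HELD = re.compile(r"\b\w+(?:-\w+){2,}\b") # "re-al-ly" dashes between syllables

def punctuation_cues(message: str) -> dict:
    return {
        "exclamation_runs": len(EXCLAIM_RUN.findall(message)),
        "question_runs": len(QUESTION_RUN.findall(message)),
        "period_runs": len(PERIOD_RUN.findall(message)),
        "dash_held_words": len(DASH_HELD.findall(message)),
    }

if __name__ == "__main__":
    print(punctuation_cues("Wait... you did WHAT?!!  That is re-al-ly something......"))
```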
patterning of features:
It can be noted, first, that some features mark a short syllabic or polysyllabic segment (e.g. capitalization, contraction, and vocal segregates), while others mark full sentences or the entire message (e.g. a series of exclamation points, letter graphics, or an initial parenthetical comment). Second, it is revealing that many of these features have an analogic structure: in some manner, they are like the tone they represent. For example, a user may employ more or fewer periods, more or fewer question marks to indicate degrees of pause or degrees of perplexity. Paralanguage in everyday conversation is highly analogic and represents feelings, moods and states of health which do not (apparently) lend themselves to the digital structure of words.

Paralinguistic features in computer conferencing occur, often, at points of change in a message: change of pace, change of topic, change of tone. In addition, many of the features rely upon a contrastive structure to communicate meaning. That is, a message which is typed in all caps does not communicate greater intensity or stress. Capitalization must occur contrastively over one or two words in an otherwise normal sentence or over one or two sentences in a message which contains some normal capitalization.

Most paralinguistic features can have more than one meaning. Reviewed in isolation, a feature might indicate a relaxed tone, an intimate relation with the receiver, or simply sloppiness in composition. Readers must rely upon the surrounding context (both words and other paralinguistic features) to narrow the range of possible meanings. The intended receiver of a message, as well as an outsider who attempts to analyse transcripts, must cope with the interpretation of paralinguistic features. Initially, the reader must distinguish glitches in the system and unintended typing errors from intentional use of repetition, spacing, etc. Subsequently, the reader must examine the immediate context of the feature and compare the usage with similar patterns in the same message, in other messages by the composer, and/or in other messages by the general population of users.
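The contrastive principle described above (capitalization carries stress only against an otherwise mixed-case message) can be made concrete with a small sketch; the 0.3 threshold and the function name are arbitrary assumptions for illustration, not values from the study.

```python
def contrastive_caps(message: str, max_caps_ratio: float = 0.3) -> list:
    """Flag fully capitalized words only when they contrast with an otherwise
    mixed-case message; an all-caps message carries no extra stress.
    The 0.3 threshold is an arbitrary choice for illustration."""
    words = [w for w in message.split() if any(c.isalpha() for c in w)]
    if not words:
        return []
    caps = [w for w in words
            if sum(c.isalpha() for c in w) > 1 and w.upper() == w]
    if len(caps) / len(words) > max_caps_ratio:
        return []   # no contrast: most of the message is capitalized anyway
    return caps     # these words plausibly carry stress

print(contrastive_caps("I said the report is due TODAY, not next week."))  # ['TODAY,']
print(contrastive_caps("I SAID THE REPORT IS DUE TODAY"))                  # []
```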
development of a code:
The findings presented in this study are taken from a limited set of contexts. For this reason, they must be regarded as a first approximation of paralinguistic code structure in computer conferencing. Moreover, the findings do not suggest that a clear code exists for the community of users. Rather, the code appears to be in a stage of development and learning.

The study has helped to define some differences among users which appear to make a difference in the paralinguistic features they employ. In the corpus of transcripts examined, usage varied between new and experienced participants, as well as between infrequent and frequent participants. Generally, experienced and frequent participants employed more paralinguistic features. However, idiosyncratic patterns appear to be more important in determining usage. The findings serve more to define questions for subsequent study than to provide answers about user variations.

In addition, it is clear that the characteristics of the computer terminals (TI 745s, primarily), as well as system characteristics, provided many of the components or "bricks" with which paralinguistic features were constructed. For example, the repeat key on the terminal allowed users to create certain forms of graphics. Also, star keys, dollar signs, colons and other available keys were employed to communicate paralinguistic information. System terms to describe a mode of operation (e.g. notepad, scratchpad, message, conference) may also influence development of a code of usage by suggesting a more formal or informal exchange.

Finally, it may be noted that early in their usage, some participants appeared to borrow formats from other media with which they were familiar (e.g. business letters, telegrams, and telephone conversations). Over time, patterns of usage converged somewhat. However, idiosyncratic variation remained strong.
conclusion:
A few conclusions can be drawn from this study. First, the presence of paralinguistic features in computer conferencing and the effort by users to communicate more information than can be carried by the words themselves suggest that people feel it is important to be able to communicate tonal and expressive information. Second, it is not easy to communicate this information. Users must work in computer conferencing to communicate information about their feelings and state of health which naturally accompanies speech. While there does not appear to be a unified and identifiable code of paralinguistic features within conferencing systems or among users of the systems, the collective behavior of participants may be creating one.
1. introduction:
The term paralanguage is used broadly in this report. It includes those vocal features outlined by Trager (1964) as well as the prosodic system of Crystal (1969). Both are concerned with the investigation of linguistic phenomena which generally fall outside the boundaries of phonology, morphology and lexical analysis. These phenomena are the voice qualities and tones which communicate expressive feelings, indicate the age, health and sex of a speaker, modify the meanings of words, and help to regulate interaction between speakers.

Paralanguage becomes an issue in print communication when individuals attempt to transcribe (and analyse) an oral presentation, or write a script which is to be delivered orally. In addition, paralinguistic analysis can be directed towards forms of print which mimic or contain elements of oral communication. These include comic strips, novels, graffiti, and computer conferencing (see Crystal and Davy 1969).

The research reported here is not concerned with a direct comparison between face-to-face and computer mediated communication. Such a comparison is useful, e.g. it can help us to understand how one form borrows elements from the other (see section 5.), or aid in the selection of the medium which is more appropriate for a given task. However, the intent here is simpler: to isolate some of the paralinguistic features which are present in computer mediated communication and to begin to map the patterning of those features.

The systems examined were used for many purposes: simple message sending (electronic mail), task related conferencing, and fun (e.g. jokes and conferences on popular topics). Bills for usage were paid by the organizations involved, not the individuals themselves. These elements within the frame may affect the style of interaction. One concern in frame analysis is to understand differences in a situation which make a difference. Clearly, there is a need to investigate conditions not included in this study in order to gain a broader understanding of paralinguistic usage. Among the conditions which might make a difference are: the presence of a secretary in the flow of information; usage based upon narrow task communications only; and situations where there is a direct cost to the user.
Appendix:
| null | null | null | null | {
"paperhash": [
"crystal|prosodic_systems_and_intonation_in_english",
"crystal|investigating_english_style"
],
"title": [
"Prosodic Systems and Intonation in English",
"Investigating English Style"
],
"abstract": [
"Preface 1. Some preliminary considerations 2. Past work on prosodic features 3. Voice-quality and sound attributes in prosodic study 4. The prosodic features of English 5. The intonation system of English 6. The grammar of intonation 7. The semantics of intonation Bibliography Index of persons Index of subjects.",
"A series to meet the need for books on modern English that are both up-to-date and authoritative.For the scholar, the teacher, the student and the general reader, but especially for English-speaking students of language and linguistics in institutions where English is the language of instruction, or advanced specialist students of English in universities where English is taught as a foreign language."
],
"authors": [
{
"name": [
"D. Crystal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Crystal",
"Derek Davy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null
],
"s2_corpus_id": [
"60515479",
"59347243"
],
"intents": [
[
"background"
],
[
"background"
]
],
"isInfluential": [
false,
false
]
} | null | 536 | 0.182836 | null | null | null | null | null | null | null | null |
f835a17a958268f7070e1fbb683568053f30c7ff | 16980721 | null | PHRAN - A Knowledge-Based Natural Language Understander | We have developed an approach to natural language processing in which the natural language processor is viewed as a knowledge-based system whose knowledge is about the meanings of the utterances of its language. The approach is oriented around the phrase rather than the word as the basic unit. We believe that this paradigm for language processing not only extends the capabilities of other natural language systems, but handles those tasks that previous systems could perform in a more systematic and extensible manner. We have constructed a natural language analysis program called PHRAN (PHRasal ANalyzer) based on this approach. This model has a number of advantages over existing systems, including the ability to understand a wider variety of language utterances, increased processing speed in some cases, a clear separation of control structure from data structure, a knowledge base that could be shared by a language production mechanism, greater ease of extensibility, and the ability to store some useful forms of knowledge that cannot readily be added to other systems. | {
"name": [
"Wilensky, Robert and",
"Arens, Yigal"
],
"affiliation": [
null,
null
]
} | null | null | 18th Annual Meeting of the Association for Computational Linguistics | 1980-06-01 | 16 | 35 | null | The problem of constructing a natural language ~rocessing system may be viewed as a problem oz constructing a knowledge-based system.From this orientation, the questions to ask are the following:What sort of knowledge does a system need about a language in order to understand the meaning of an utterance or to produce an utterance in that language? How can this knowledge about one's language best be represented, organized and utilized? Can these tasks be achieved so that the resulting system is easy to add to and modify? Moreover, can the system be made to emulate a human language user?Existing natural language processing systems vary considerably in the kinds of knowledge about language they possess, as well as in how thxs knowledge is represented, organized and utilized. However, most of these systems are based on ideas about language that do not come to grips with the fact that a natural, language processor neeos a great deal of knowledge aoout the meaning of its language's utterances.Part of the problem is that most current natural language systems assume that the meaning of a natural language utterance can be computed as a function of the constituents of the utterance. The basic constituents of utterances are assumed to be words, and all the knowledge the system has about ~he semantics of its language zs stored at the word level (~i~nbaum etal, 1979) (Riesbeck et al, 1975) (Wilks, 197~) (Woods, 1970) . However, many natural language utterances have interpretations that cannot be found by examining their components. Idioms, canned phrases, lexical collocations, and structural formulas are instances of large classes of language utterances whose interpretation require knowledge about She entire phrase independent of its individual words (Becker, 19q5) (Mitchell, 19~71) .We propose as an alternative a model of language use that comes from viewing language processing systems as knowledge-based systems tha£require the representation and organization of large amounts of knowledge about what the utterances of a language mean. This model has the following properties:I. It has knowledge about the meaning of the words of the language, but in addition, much of the system's knowledge is about the meaning of larger forms of u~terancas.2. This knowledge is stored in the form of pattern-concept pairs.A pattern is a phrasal cons~ruc~ oI varyxng degrees of specificity.A concept is a notation that represents the meaning of the phrase.Together, this pair associates different forms of utterances with their meanings.3. The knowledge about language contained in the system is kept separate from the processing strategies that apply this knowledge to the understanding and production tasks.4. The understanding component matches incoming utterances against known patterns, and then uses the concepts associated with the matched patterns to represent the utterance's meaning. | By the term "phrasal language constructs" we refer to those language units of which the language user has s~ecific knowledge. We cannot present our entire classification oF these constructs here. However, our phrasal constructs range greatly in flexibility. 
For example, fixed expressions like "by and large , the Big Apple (meaning N.Y.C.), and lexical collocations such as "eye dro~per" and "weak safety" allow little or no modificatxonA idioms like "kick the bucket" and "bury the hatchet allow the verb in them to s~pear in various forms-discontinuous dependencies like look ... up" permi~ varying positional relationships of their constituents. All these constructs are phrasal in that the language user must know the meaning of the construct as a whole In order to use it correctly.In the most general case, a phrase may express the usage of a word sense. For example, to express one usage of the verb kick, the phrase "<person> <kick-form> <object>" is used.This denotes a person followed by some verb form inyolving kick (e.g., kick, kicked, would ~ave kicked") followe~"~ some utterance ueno~ing an oojec~.Our notion of a phrasal language construct is similar to a structural formula (Fillmore, 1979 )-However, our criterion for dlr~trl'F/~ing whether a set of forms should be accomodated by the same phrasal pattern is essentially a conceptual one.Since each phrasal pattern in PHRAN is associated with a concept, if the msenlngs of phrases are different, they should be matched by different patterns. If the surface structure of the phrases is similar and they seem to mean the same thing, %hen they should be accomodated by one pattern. At the center of PHRAN is a knowledge base of phrasal patterns.These include literal strings such as "so's your old man"; patterns such as "<nationality> restaurant", and very ~eneral phrases such as "<person> <give> <person> <object> .Associated with each phrasal pattern is a conceptual template.A conceptual template is a piece of meanln~ representation with possible references to pieces of the associated phrasal pattern.For example, associated with the phrasal pattern "<nationality> restaurant" is the conceptual template denoting a restaurant that serves <nationality> type food; associated with the phrasal pattern "<person~> <give> <personJ> <object>" is the conceptual template that denotes a transfer of possession by <person1> of <object> to <personJ> from <person1>. go%ten from the sentence a~d phras~T paz~erns recognized in it.The first indexing mechanism works but it requires that any pattern used to recognize a phrasal expressions be suggested by some word in it. This is unacceptable because it will cause the pattern to be suggested whenever the word it is triggered by is mentioned. The difficulties inherent in such an indexing scheme can be appreciated by considering which word in the phrase "by ana large" should be used to trigger it. Any choice we make will cause the pattern ~o be suggested very often in contexts when it is not appropriate.~nthis form, FHRAN's ~rocessing roughly resembles ELI's (Riesbeck et el, 19V59. We therefore developed the second mechanism. The ~ atterns-concapt pairs of the database are indexed in s ree.As words are read, the pattern suggesting mechanism travels down this tree, choosing branches according to the meanings of the words.It suggests to PHRAN the patterns found at the nodes it has arrived at. The list of nodes is remembered, and when the next word is read the routine continues to branch from them, in addition to starting from the root.In practice, the number of nodes in the list is rather smsll.For example, whenever a noun-phrase is followed by an active form of some verb, the suggesting routine instructs PHRAN to consider the simple declarative forms of the verb. 
When a noun-phrase is followed by the vero 'to be' followed by the perfective form of some verb, the routine instructs PHRAN to consider the passive uses of the last verb. The phrasal pattern that will recognize the expression "by and large" is found st the node reaches only after seeing those three woras consecutively.In this manner this pattern will be suggested only when neccessary.The main problem with this scheme is that it does not lend itself well to allowing contextual cues to influence the choice of patterns PHRAN should try. This is one area where future research will be concentrates.There are a number of other natural lenguage processing systems that either use some notion of patterns or produce meaning structures as output.We contrast PHRAN w~th some of these.An example of a natural language understanding system that produces declarative meaning representations Ss Riesbeck's "conceptual analyzer" (Riesbeck, 1974 When a word is reed by the system, the routines associated with that word are used to build up a meaning structure that eventually denotes the messing of the entire utterance. 19~) . It receives a sentence as input and ,na]yzes it in several separate "stages". In effect, PARRY replaces the input wi~h sentences of successively simpler form. In %he simplified sentence PARRY searches for patterns, of which there ere two bssic types: patterns used to interpret the whole ~entence, snd those used on~y to interpret parts of ~t {relative clauses, for example).For PARRY, the purpose of the natural language analyzer is only to translate the input into a simplified form that a model of a paranoid person may use to determine an appropriate response. No attempt Js made to model the analyzer itself after a human language user, as we are doing, nor are claims made to this effect.A system attempting to model human language analysis could not permit several unre]e+ed passes, the use of s transition network grsmmsr to interpret only certain sub-strings in the input, or a rule permitting it to simply ignore parts of the input.This theoretical shortcoming of PARRY -hsving separate grammar rules for the complete sentence ~nd for sub-parts o" it -is shsred by Henarix's LYFER (Hendrix. IO77) . LIFER is designed to enable a database to be queried usJn~ 8 subset of the English language.As is t~_ case for PARRY, the natural language ansAysis done by ~Ar~R is not meant to model humans.Rather, its function is to translate the input into instructions and produce s reply as efficiently es possible, and nothing resembling s representation of tne meaning of the input is ever l ormea, u: course the purpose of LIFE~ is not to be th ~ front end of a system that understands coherent texts and which must therefore perform subsequent inference processes.Wh~le LIFER provides s workable solution to the natural language problem in a limited context I msny general problems of language analysis are not adoresseo in that context. SOPHYE (Burton, 1976) was designed to assist students in learning about simple electronic circuits. It can conduct a dialogue with the user in a restricted subset of the English language, and it uses knowledge about patterns of speech to interpret the input. SOPHIE accepts only certain questions and instructions concerning a few tasks.As is the case with LI-FER. the langusge utterances acceptable to the system are restricted to such an extent that many natural language processing problems need not be deelt with and other problems have solutions appropriate only to this context. 
In addition, SOPHIE does not produce any representation of the meanin~ of the input, and it makes more than one pass on the Input i~morlng unknown words, practices that nave already been crlticized.The augmented finite state transition network (ATN) has been used by a number of researchers to aid in the analysis of natural language sentences (for example, see Woods 1970) .However, most systems that use ATN's incorporate one feature which we find objectioneble on both theoretical and practical grounds. This is the separation of analysis into syntactic and semantic phases.The efficacy and psychological validity of the separation of syntactic and sementicprocessing has been argued at lengthelsewhere (see Schar~ 1975 for example). In addition, most ATN based systems (for .xample Woods' LUNAR program) do not produce represents%ions, but rather, run queries of a data base.In contrast to the systems just described, Wilks' English-French machine ~ranslstor do~s not share several of their shortcomings (Wilks, 197~) . It produces a representation of the meaning of an utterance, and it attempts to deal with unrestricted natural language. The maxn difference between Wilk's system and system we describe is that Wilks' patterns are matched against concepts mentioned in a sentence.To recognize these concepts he attaches representations to words in e dictionary.The problem is that this presupposes that there is a simple correspondence between %he form of a concept and the form of a language utterance.However, it is the fact that this correspondence is not simple that leads to the difficulties we are addressing in our work. In fact, since the correspondence of words to meanings is complex, it would appear ~hat a program like Wilks' translator will even~ually need %he kind of knowledge embodied in PHRAN to complete its analysis.One recent attempt at natural language analysis that radically departs f~om pattern-based approaches is Rieger ' and Small 's system (Smell, 1978) . This system uses word experts rather than patterns as its basic mechsnxsm. ~nelr system acknowledges the enormity of the knowledge base required for language understanding, and proposes s way of addressing the relevant issues. However, the idea of puttin~ as much information as possible under individual words is about as far from our -conception of language analysis as one can get, and we would argue, would exemplify all the problems we have described in word-based systems. | null | lookxng for concepts in the caza oase ~net match the concept it wishes to express. The phrasal patterns associated with these concepts are used to generate the natural language utterance.6. The data-base of pattern-concept pairs is shared by both the unaerstanding mechanism and the mechanism of language production.Other associations besides meanings may be kept along with a phrase. For example, a description of the contexts in which the phrase is an appropriate way to express its meaning may be stored. A erson or situation strongly associated wi~h the phrase may also be tied to it.ANalyzer) is a natural language understanding system based on this view of language use. PNNAN reads English text and produces structures that represent its meaning. As it reads an utterance, PHRAN searches its knowledge base of pattern-conceptpairs for patterns that best interpret the text.The concept portion of these pairs is then used to produce the meaning representation for the utterance.PHRAN has a number of advantages over previous systems: I. 
The system is able to handle phrasal language units that are awkwardly handled by previous systems but which are found with great frequency in ordinary speech and common natural language texts.2. It is simpler to add new information to the system because control and representation are kept separate. To extend the system, new pattern-concept pairs are simply added to the data-base.A pattern-concept pair consists of a specification of the phrasal unit, an associated concept, and some additional information about how the two are related. When PHRAN instantiates a concept, it creates an item called a term that includes the concept as well as some additional information.A pattern is a sequence of conditions that must hold true for a sequence of terms.A pattern may specify optional terms toq, the place where these may appear, ana what effect (if any) their appearance will have on the properties of the term formea if the pattern is matched. For example, consider the following informal description of one of the patterns suggested by the mention of the verb 'to eat' in certain contexts. Notice that the third term is marked as optional.If it is not present in the text, PHRAN will fill'the OBJECT slot with a default representing generic food.The following is a highly simplified example of how PHRAN processes the sentence "John dropped out of school": First the word "John" is read. "John" matches the patter~ consisting of the literal "John", and the concept associated with this pattern causes a term to be formed that represents a noun phrase and a particular male erson named John.No other patterns were suggested. ~his term is added on to *CONCEPTS, the list of terms PHRAN keeps and which will eventually contain the meaning of the sentence.Thus This new fact is now stored under the last term.Next the word "out" is read. The pattern suggestion mechanism is alerted by the occurence of the verb 'drop' followed by the word 'out', and at this point It instructs PHRAN to consi ;r the pattern I [<person> <DROP> "out" "of" <school> I [ ... ] !The list in *CONCEPT* is checked against this pattern to see if it matches its first two terms, end since that is the case, this fact is stored under the secord term. A term associated with 'out' is now added to *CONCEPT*:< [JOHNI -person, NP] , [DROP -verb] , lOUT ] >The two patterns that have matched up to DROP are checked to see if the new term extends them. This is true only for the second pattern, a~d this fact is stored unde~ the next term.The pattern l<person> <DROP> <object>) is discarded. Now the word "of" is read.A term is formed and added to *CONCEPT*.The pattern that matched to OUT is extended by OF so %he pattern is moved to ~e next term.The word "high" is read and a term is formed and added to *CONCEPt. Now the pattern under OF is compared against HIGH.It doesn't satisfy the next condition. PHRAN reads "school", and the pattern suggestion routine presents PHRAN with two patterns: The two patterns are compared against the last term, and both are matched. The last two terms a~'e removed from *CONCEPT*, and the patterns under 0F are checked to determine which of the two possible meanings we have should be chosen. Patterns are suggested such that the more specific ones appear first, so that the more specific interpretation will be chosen if all patterns match equally well.. 0nly if the second meanin~ (i.e. a school that is high) were explicitly specifled by a previous pattern, would it have been chosen.I. 
I [ "A term is formed and added to *CONCEPT*, which now contains < [JOHNI -person, NP~ . [DROP -verb] [OUT] , [0FI , [HIGH-SCHOOLI -school, NPJ >The pattern under OF is checked against the last term in *CONCEPT ~.PHRAN finds a complete match, so all the matched terms are removed and replaced by the concept associated with this pattern.*CONCEPT* now contains this concept as the final result:< [ ($SCHOOLING (STUDENT JOHNI) . (SCHOOL HIGH-SCHOOLI) (TERMINATION PREMATURE)) ] > 4.2 Pattern-Concept Pairs In More Detail d.2.1 The Pattern -The pattern portion of a pattern-concept pair consists of a sequence of predicates. These may take one of several forms:which will match only a term representing this exact word.2. A class name (in parentheses); will match any term ~epresenting a member @f this class (e.g. "(FOOD)" or "(PHYSICAL-OBJECT)").~. A pair, the first element of which is a property name end the second is a value; will match any ~e rm hav%ng the required valge of the property e.g. "(Part-0f-Speech VERB)").In addition, we may negate a condition or specify that a conjunction or disjunction of several must hold.The following is one of the patterns which may be suggested by the occurrence of the verb 'give' in an utterance:[(PERSON) (BOOT GIVE) (PERSON) (PNYSOB)I 4.2.1.1Optional Parts -To indicate the presence of optional terms, a list of pattern concept-pairs is inserted into the pattern at the appropriate place. These pairs have as their first element a sub-pattern that will match the optional terms. The second part describes how the new term to be formed if the maxo pattern is found should be modified to reflect the existence of the optional sub-pattern.The concept corresponding to the optional part of a pattern zs treated in a form slightly different from the way we treat regular concept parts of pattern-concept pairs.As usual, it consists of pairs of expressions. The first of each pair will be places as is at ~he end of the properties o~ the term to be formed, end the second will be evaluated first and then placed on that list.For example, another pattern suggested when 'give' is seen is the following:[(PERSON) (ROOT ~VE).~PHYSOB) (~[T0 (PERSON)) (TO (OPT-VAL 2 CD-FORM))])]The terms of this pattern describe a person, the verb give, and then some pnysical object. The last term describes the optional terms, consisting of the word to followed by a person description. Associated with th~ pattern is a concept part that specifies what to do with the optional part if it is there. Here it specifies that the second term in the optional pattern should fill in the TO slot in the conceptualization associated with the whole pattern.This particular pattern need not be a separate pattern in PHRAN from the one that looks for the verb followed by the recipient followed by the object transferred. We often show patterns without all the alternatives that are possible for expositional purposes.Sometimes it is simpler to write the actual patterns separately, although we attach no theoretical significance to thxs disposition.When a pattern is matched. PHRAN removes the terms that match zt from *CONCEPT* and replaces them with a new term, as defined by the second part of the pattern-concept pair. For example, here is a pattern-concept pazr that may be suggested when the verb "eat' is encountered: The concept portion of this pair describes a term covering an entire sentence, and whose ~eaning is the action of INGESTing some food (Schank, 1975) . 
The next two descriptors specify how $o fill in vaTiable parts of this action. The expression (VALUE n prop) specifies the 'prop' property of the n'th term in the matched sequence of the pattern (not including optional terms).OFT-VAL does the same thing with regards to a matched optional sub-pattern.Thus the concept description above specifies that the actor of the action is to be the term matching the first condition. The object eaten will be either the default concept food, or, if the optional sub-pattern was found, the term corresponding to this suo-pattern.Sometimes a slot in the conceptualization can be filled by a term in a higher level pattern of which this one is an element. For example, when analyzing "John wanted to eat a cupcake" a slight modification of the previous pattern is used to find the meaning of "to eat a cupcake".Since no subject appears In this form, the higher level pattern specifies where it may find it. That is, a pattern associated with "want" looks like the following:{ ~<person> <WANT> <in$initive>]This specifies that the subject of the clause following want is the same as the subject of went.When s word is read PHRAN compares the ~atterns offered by the pattern suggestin¢ routine with the list *CONCEPT* in ~ne manner aescrioea in ~ne example in section 4.1.3. It discards patterns that confllct with *CONCEPT* and retains the rest. Then FH~AN tries to determine which meaning ?f the word to choose, using the "active" patterns (those that have matched up to the point where PHRAN has read). It checks if there is a particular meaning that will match the next slot in some pattern or if no such definition exists if there is a meanin¢ that might be the beginning of a' sequence of terms -whose meaning, as determined via a pa~tern-concept pair, will satisfy the next slot in one of the active patterns.If this is the case, that meanin~ of the word is chosen. Otherwise PHRAR defaults to the fzrst of the meanings of the word.A new term is formed and if it satisfies the next condition in one of these patterns, the appropriate ~atzsrn Is moved to the pattern-list of the new term. If zhe next condition in the pattern indicates that the term speczfled is optional, %hen PHRAN checks for these Optlonal terms, and if it is convinced that they are not present, it checks to see if the new term satisfies the condition following the optional ones in the pattern.When a pattern has been matched completely, PHRAN continues checking all the other patterns on the pattern-list.When it has finished, PHRAN will take the longest pattern that was matched and will consider the concept of its pattern-concept pair to be the meaning of the sequence.If there are several patterns of the same length :hat we re matched PHRAN will group all their meanings together.New patterns are suggested end a disembiguation process follows, exactly as in the case of a new word being read.For example, the words "the big apple", when recognized, will have two possible meanings: one being a large fruit, the other being New York Clty, PHRAN will check the patterns active at that time %0 determine if one of these two meanings satisfies the next condition in one of the patterns.If so, then that meaning will be chosen, Otherwise 'a large fruit' will be the default, as it is the first in the list of possible meanings.In certain cases there is need for slightly modified notions of pattern and concept, the most prominent examples being adverbs and adverbial phrases. Such phrases are also recognized through the use of patterns. 
However, upon recognizing an adverb, PHRAN searches within the active patterns for an action that it can modify.When such an action is found the concept part of the pair associated with the adverb is used to modify the concept of the original action.Adverbs such as "quickly"and "slowly" are currently defined and can be used to modify conceptualizations containing various actions.Thus PHRAN can handle constructs like:John ate slowly. Ouickly, John left the house. John left the house quickly. John slowly ate the apple. John wanted slowly to eat the apple. Some special cases of negation are handled by specific patterns.For example, the negation of the verb want usually is interpreted ss meaning "want not" -"~ didn't want to go ~o school" means the same thing as "Mary wanted not to go:to school".Thus PHRAN conzains the specifi~ pattern [<person> (do> "not" <want> <inf-phrase>! which Is associated with this interpretation.Retrieving the phrasal pattern matching a particular utterance from PHRAN's knowledge base is sn important problem that we have not yet solved to our complete satisfaction.We find some consolation in the fact that the problem of indexing a large data base is a neccesary and familiar problem for all Enowledge based systems.We have tried two pattern suggestion mechanisms with PHRAN: I. Keying oatterns off individual words or previously matched patterns. | and is in principle sharable by a system for language productioD (Such a mechanism is n~w under construction). Thus adding xnxorma~lon ~o the base should extend the capabz]ities of both mechanisms.4. Because associations other than meanings can be stored along with phrasal unzts, the identification of a phrase can provide contextual clues not otherwise available to subsequent processing mechanisms. 5. The model seems to more adequately reflect the psychological reality of human language use. | Main paper:
introduction:
The problem of constructing a natural language processing system may be viewed as a problem of constructing a knowledge-based system. From this orientation, the questions to ask are the following: What sort of knowledge does a system need about a language in order to understand the meaning of an utterance or to produce an utterance in that language? How can this knowledge about one's language best be represented, organized and utilized? Can these tasks be achieved so that the resulting system is easy to add to and modify? Moreover, can the system be made to emulate a human language user?

Existing natural language processing systems vary considerably in the kinds of knowledge about language they possess, as well as in how this knowledge is represented, organized and utilized. However, most of these systems are based on ideas about language that do not come to grips with the fact that a natural language processor needs a great deal of knowledge about the meaning of its language's utterances. Part of the problem is that most current natural language systems assume that the meaning of a natural language utterance can be computed as a function of the constituents of the utterance. The basic constituents of utterances are assumed to be words, and all the knowledge the system has about the semantics of its language is stored at the word level (Birnbaum et al, 1979) (Riesbeck et al, 1975) (Wilks, 197~) (Woods, 1970). However, many natural language utterances have interpretations that cannot be found by examining their components. Idioms, canned phrases, lexical collocations, and structural formulas are instances of large classes of language utterances whose interpretation requires knowledge about the entire phrase independent of its individual words (Becker, 1975) (Mitchell, 1971).

We propose as an alternative a model of language use that comes from viewing language processing systems as knowledge-based systems that require the representation and organization of large amounts of knowledge about what the utterances of a language mean. This model has the following properties:

1. It has knowledge about the meaning of the words of the language, but in addition, much of the system's knowledge is about the meaning of larger forms of utterances.

2. This knowledge is stored in the form of pattern-concept pairs. A pattern is a phrasal construct of varying degrees of specificity. A concept is a notation that represents the meaning of the phrase. Together, this pair associates different forms of utterances with their meanings.

3. The knowledge about language contained in the system is kept separate from the processing strategies that apply this knowledge to the understanding and production tasks.

4. The understanding component matches incoming utterances against known patterns, and then uses the concepts associated with the matched patterns to represent the utterance's meaning.
phrasal language constructs:
By the term "phrasal language constructs" we refer to those language units of which the language user has s~ecific knowledge. We cannot present our entire classification oF these constructs here. However, our phrasal constructs range greatly in flexibility. For example, fixed expressions like "by and large , the Big Apple (meaning N.Y.C.), and lexical collocations such as "eye dro~per" and "weak safety" allow little or no modificatxonA idioms like "kick the bucket" and "bury the hatchet allow the verb in them to s~pear in various forms-discontinuous dependencies like look ... up" permi~ varying positional relationships of their constituents. All these constructs are phrasal in that the language user must know the meaning of the construct as a whole In order to use it correctly.In the most general case, a phrase may express the usage of a word sense. For example, to express one usage of the verb kick, the phrase "<person> <kick-form> <object>" is used.This denotes a person followed by some verb form inyolving kick (e.g., kick, kicked, would ~ave kicked") followe~"~ some utterance ueno~ing an oojec~.Our notion of a phrasal language construct is similar to a structural formula (Fillmore, 1979 )-However, our criterion for dlr~trl'F/~ing whether a set of forms should be accomodated by the same phrasal pattern is essentially a conceptual one.Since each phrasal pattern in PHRAN is associated with a concept, if the msenlngs of phrases are different, they should be matched by different patterns. If the surface structure of the phrases is similar and they seem to mean the same thing, %hen they should be accomodated by one pattern. At the center of PHRAN is a knowledge base of phrasal patterns.These include literal strings such as "so's your old man"; patterns such as "<nationality> restaurant", and very ~eneral phrases such as "<person> <give> <person> <object> .Associated with each phrasal pattern is a conceptual template.A conceptual template is a piece of meanln~ representation with possible references to pieces of the associated phrasal pattern.For example, associated with the phrasal pattern "<nationality> restaurant" is the conceptual template denoting a restaurant that serves <nationality> type food; associated with the phrasal pattern "<person~> <give> <personJ> <object>" is the conceptual template that denotes a transfer of possession by <person1> of <object> to <personJ> from <person1>. go%ten from the sentence a~d phras~T paz~erns recognized in it.The first indexing mechanism works but it requires that any pattern used to recognize a phrasal expressions be suggested by some word in it. This is unacceptable because it will cause the pattern to be suggested whenever the word it is triggered by is mentioned. The difficulties inherent in such an indexing scheme can be appreciated by considering which word in the phrase "by ana large" should be used to trigger it. Any choice we make will cause the pattern ~o be suggested very often in contexts when it is not appropriate.~nthis form, FHRAN's ~rocessing roughly resembles ELI's (Riesbeck et el, 19V59. We therefore developed the second mechanism. The ~ atterns-concapt pairs of the database are indexed in s ree.As words are read, the pattern suggesting mechanism travels down this tree, choosing branches according to the meanings of the words.It suggests to PHRAN the patterns found at the nodes it has arrived at. 
The list of nodes is remembered, and when the next word is read the routine continues to branch from them, in addition to starting from the root.In practice, the number of nodes in the list is rather smsll.For example, whenever a noun-phrase is followed by an active form of some verb, the suggesting routine instructs PHRAN to consider the simple declarative forms of the verb. When a noun-phrase is followed by the vero 'to be' followed by the perfective form of some verb, the routine instructs PHRAN to consider the passive uses of the last verb. The phrasal pattern that will recognize the expression "by and large" is found st the node reaches only after seeing those three woras consecutively.In this manner this pattern will be suggested only when neccessary.The main problem with this scheme is that it does not lend itself well to allowing contextual cues to influence the choice of patterns PHRAN should try. This is one area where future research will be concentrates.There are a number of other natural lenguage processing systems that either use some notion of patterns or produce meaning structures as output.We contrast PHRAN w~th some of these.An example of a natural language understanding system that produces declarative meaning representations Ss Riesbeck's "conceptual analyzer" (Riesbeck, 1974 When a word is reed by the system, the routines associated with that word are used to build up a meaning structure that eventually denotes the messing of the entire utterance. 19~) . It receives a sentence as input and ,na]yzes it in several separate "stages". In effect, PARRY replaces the input wi~h sentences of successively simpler form. In %he simplified sentence PARRY searches for patterns, of which there ere two bssic types: patterns used to interpret the whole ~entence, snd those used on~y to interpret parts of ~t {relative clauses, for example).For PARRY, the purpose of the natural language analyzer is only to translate the input into a simplified form that a model of a paranoid person may use to determine an appropriate response. No attempt Js made to model the analyzer itself after a human language user, as we are doing, nor are claims made to this effect.A system attempting to model human language analysis could not permit several unre]e+ed passes, the use of s transition network grsmmsr to interpret only certain sub-strings in the input, or a rule permitting it to simply ignore parts of the input.This theoretical shortcoming of PARRY -hsving separate grammar rules for the complete sentence ~nd for sub-parts o" it -is shsred by Henarix's LYFER (Hendrix. IO77) . LIFER is designed to enable a database to be queried usJn~ 8 subset of the English language.As is t~_ case for PARRY, the natural language ansAysis done by ~Ar~R is not meant to model humans.Rather, its function is to translate the input into instructions and produce s reply as efficiently es possible, and nothing resembling s representation of tne meaning of the input is ever l ormea, u: course the purpose of LIFE~ is not to be th ~ front end of a system that understands coherent texts and which must therefore perform subsequent inference processes.Wh~le LIFER provides s workable solution to the natural language problem in a limited context I msny general problems of language analysis are not adoresseo in that context. SOPHYE (Burton, 1976) was designed to assist students in learning about simple electronic circuits. 
It can conduct a dialogue with the user in a restricted subset of the English language, and it uses knowledge about patterns of speech to interpret the input. SOPHIE accepts only certain questions and instructions concerning a few tasks.As is the case with LI-FER. the langusge utterances acceptable to the system are restricted to such an extent that many natural language processing problems need not be deelt with and other problems have solutions appropriate only to this context. In addition, SOPHIE does not produce any representation of the meanin~ of the input, and it makes more than one pass on the Input i~morlng unknown words, practices that nave already been crlticized.The augmented finite state transition network (ATN) has been used by a number of researchers to aid in the analysis of natural language sentences (for example, see Woods 1970) .However, most systems that use ATN's incorporate one feature which we find objectioneble on both theoretical and practical grounds. This is the separation of analysis into syntactic and semantic phases.The efficacy and psychological validity of the separation of syntactic and sementicprocessing has been argued at lengthelsewhere (see Schar~ 1975 for example). In addition, most ATN based systems (for .xample Woods' LUNAR program) do not produce represents%ions, but rather, run queries of a data base.In contrast to the systems just described, Wilks' English-French machine ~ranslstor do~s not share several of their shortcomings (Wilks, 197~) . It produces a representation of the meaning of an utterance, and it attempts to deal with unrestricted natural language. The maxn difference between Wilk's system and system we describe is that Wilks' patterns are matched against concepts mentioned in a sentence.To recognize these concepts he attaches representations to words in e dictionary.The problem is that this presupposes that there is a simple correspondence between %he form of a concept and the form of a language utterance.However, it is the fact that this correspondence is not simple that leads to the difficulties we are addressing in our work. In fact, since the correspondence of words to meanings is complex, it would appear ~hat a program like Wilks' translator will even~ually need %he kind of knowledge embodied in PHRAN to complete its analysis.One recent attempt at natural language analysis that radically departs f~om pattern-based approaches is Rieger ' and Small 's system (Smell, 1978) . This system uses word experts rather than patterns as its basic mechsnxsm. ~nelr system acknowledges the enormity of the knowledge base required for language understanding, and proposes s way of addressing the relevant issues. However, the idea of puttin~ as much information as possible under individual words is about as far from our -conception of language analysis as one can get, and we would argue, would exemplify all the problems we have described in word-based systems.
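As a rough illustration of the pattern-concept pair idea described above (this sketch is not the paper's Lisp implementation; the class, the field names and the example pair are assumptions), a pattern can be treated as a sequence of predicates over terms, and a concept as a template that builds a meaning from the matched terms:

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

Term = dict                                # a term: a word plus semantic properties
Predicate = Callable[[Term], bool]

@dataclass
class PatternConceptPair:
    pattern: Sequence[Predicate]           # one condition per successive term
    concept: Callable[[List[Term]], dict]  # builds the meaning representation

# Illustrative pair for the "<nationality> restaurant" pattern mentioned above.
nationality_restaurant = PatternConceptPair(
    pattern=[lambda t: "NATIONALITY" in t.get("classes", ()),
             lambda t: t.get("word") == "restaurant"],
    concept=lambda ts: {"object": "RESTAURANT", "serves": ts[0]["meaning"] + " food"},
)

terms = [{"word": "Chinese", "classes": ("NATIONALITY",), "meaning": "Chinese"},
         {"word": "restaurant"}]
if len(terms) == len(nationality_restaurant.pattern) and \
   all(p(t) for p, t in zip(nationality_restaurant.pattern, terms)):
    print(nationality_restaurant.concept(terms))
    # {'object': 'RESTAURANT', 'serves': 'Chinese food'}
```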
the knowledge base used by phran is declarative,:
and is in principle sharable by a system for language production (such a mechanism is now under construction). Thus adding information to the base should extend the capabilities of both mechanisms. 4. Because associations other than meanings can be stored along with phrasal units, the identification of a phrase can provide contextual clues not otherwise available to subsequent processing mechanisms. 5. The model seems to more adequately reflect the psychological reality of human language use.
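A schematic sketch of the point above that a single declarative base of pattern-concept pairs could serve both understanding and production; the data layout and the understand/generate functions are assumptions invented for illustration, not PHRAN's actual interfaces.

```python
# Schematic only: one declarative knowledge base consulted in both directions.
# understand() maps a word sequence to the concept of the first matching pattern;
# generate() maps a concept back to a surface form. Both read the same pairs.
KNOWLEDGE_BASE = [
    {"pattern": ["so's", "your", "old", "man"], "concept": {"act": "insult-retort"}},
    {"pattern": ["by", "and", "large"], "concept": {"qualifier": "in-general"}},
]

def understand(words):
    for pair in KNOWLEDGE_BASE:
        n = len(pair["pattern"])
        if words[:n] == pair["pattern"]:
            return pair["concept"], words[n:]
    return None, words

def generate(concept):
    for pair in KNOWLEDGE_BASE:
        if pair["concept"] == concept:
            return " ".join(pair["pattern"])
    return None

print(understand(["by", "and", "large", "it", "works"]))  # ({'qualifier': 'in-general'}, ['it', 'works'])
print(generate({"qualifier": "in-general"}))              # by and large
```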
overview of phran patterns -:
A pattern-concept pair consists of a specification of the phrasal unit, an associated concept, and some additional information about how the two are related. When PHRAN instantiates a concept, it creates an item called a term that includes the concept as well as some additional information.A pattern is a sequence of conditions that must hold true for a sequence of terms.A pattern may specify optional terms toq, the place where these may appear, ana what effect (if any) their appearance will have on the properties of the term formea if the pattern is matched. For example, consider the following informal description of one of the patterns suggested by the mention of the verb 'to eat' in certain contexts. Notice that the third term is marked as optional.If it is not present in the text, PHRAN will fill'the OBJECT slot with a default representing generic food.The following is a highly simplified example of how PHRAN processes the sentence "John dropped out of school": First the word "John" is read. "John" matches the patter~ consisting of the literal "John", and the concept associated with this pattern causes a term to be formed that represents a noun phrase and a particular male erson named John.No other patterns were suggested. ~his term is added on to *CONCEPTS, the list of terms PHRAN keeps and which will eventually contain the meaning of the sentence.Thus This new fact is now stored under the last term.Next the word "out" is read. The pattern suggestion mechanism is alerted by the occurence of the verb 'drop' followed by the word 'out', and at this point It instructs PHRAN to consi ;r the pattern I [<person> <DROP> "out" "of" <school> I [ ... ] !The list in *CONCEPT* is checked against this pattern to see if it matches its first two terms, end since that is the case, this fact is stored under the secord term. A term associated with 'out' is now added to *CONCEPT*:< [JOHNI -person, NP] , [DROP -verb] , lOUT ] >The two patterns that have matched up to DROP are checked to see if the new term extends them. This is true only for the second pattern, a~d this fact is stored unde~ the next term.The pattern l<person> <DROP> <object>) is discarded. Now the word "of" is read.A term is formed and added to *CONCEPT*.The pattern that matched to OUT is extended by OF so %he pattern is moved to ~e next term.The word "high" is read and a term is formed and added to *CONCEPt. Now the pattern under OF is compared against HIGH.It doesn't satisfy the next condition. PHRAN reads "school", and the pattern suggestion routine presents PHRAN with two patterns: The two patterns are compared against the last term, and both are matched. The last two terms a~'e removed from *CONCEPT*, and the patterns under 0F are checked to determine which of the two possible meanings we have should be chosen. Patterns are suggested such that the more specific ones appear first, so that the more specific interpretation will be chosen if all patterns match equally well.. 0nly if the second meanin~ (i.e. a school that is high) were explicitly specifled by a previous pattern, would it have been chosen.I. I [ "A term is formed and added to *CONCEPT*, which now contains < [JOHNI -person, NP~ . [DROP -verb] [OUT] , [0FI , [HIGH-SCHOOLI -school, NPJ >The pattern under OF is checked against the last term in *CONCEPT ~.PHRAN finds a complete match, so all the matched terms are removed and replaced by the concept associated with this pattern.*CONCEPT* now contains this concept as the final result:< [ ($SCHOOLING (STUDENT JOHNI) . 
(SCHOOL HIGH-SCHOOLI) (TERMINATION PREMATURE)) ] > 4.2 Pattern-Concept Pairs In More Detail d.2.1 The Pattern -The pattern portion of a pattern-concept pair consists of a sequence of predicates. These may take one of several forms:which will match only a term representing this exact word.2. A class name (in parentheses); will match any term ~epresenting a member @f this class (e.g. "(FOOD)" or "(PHYSICAL-OBJECT)").~. A pair, the first element of which is a property name end the second is a value; will match any ~e rm hav%ng the required valge of the property e.g. "(Part-0f-Speech VERB)").In addition, we may negate a condition or specify that a conjunction or disjunction of several must hold.The following is one of the patterns which may be suggested by the occurrence of the verb 'give' in an utterance:[(PERSON) (BOOT GIVE) (PERSON) (PNYSOB)I 4.2.1.1Optional Parts -To indicate the presence of optional terms, a list of pattern concept-pairs is inserted into the pattern at the appropriate place. These pairs have as their first element a sub-pattern that will match the optional terms. The second part describes how the new term to be formed if the maxo pattern is found should be modified to reflect the existence of the optional sub-pattern.The concept corresponding to the optional part of a pattern zs treated in a form slightly different from the way we treat regular concept parts of pattern-concept pairs.As usual, it consists of pairs of expressions. The first of each pair will be places as is at ~he end of the properties o~ the term to be formed, end the second will be evaluated first and then placed on that list.For example, another pattern suggested when 'give' is seen is the following:[(PERSON) (ROOT ~VE).~PHYSOB) (~[T0 (PERSON)) (TO (OPT-VAL 2 CD-FORM))])]The terms of this pattern describe a person, the verb give, and then some pnysical object. The last term describes the optional terms, consisting of the word to followed by a person description. Associated with th~ pattern is a concept part that specifies what to do with the optional part if it is there. Here it specifies that the second term in the optional pattern should fill in the TO slot in the conceptualization associated with the whole pattern.This particular pattern need not be a separate pattern in PHRAN from the one that looks for the verb followed by the recipient followed by the object transferred. We often show patterns without all the alternatives that are possible for expositional purposes.Sometimes it is simpler to write the actual patterns separately, although we attach no theoretical significance to thxs disposition.When a pattern is matched. PHRAN removes the terms that match zt from *CONCEPT* and replaces them with a new term, as defined by the second part of the pattern-concept pair. For example, here is a pattern-concept pazr that may be suggested when the verb "eat' is encountered: The concept portion of this pair describes a term covering an entire sentence, and whose ~eaning is the action of INGESTing some food (Schank, 1975) . The next two descriptors specify how $o fill in vaTiable parts of this action. The expression (VALUE n prop) specifies the 'prop' property of the n'th term in the matched sequence of the pattern (not including optional terms).OFT-VAL does the same thing with regards to a matched optional sub-pattern.Thus the concept description above specifies that the actor of the action is to be the term matching the first condition. 
The object eaten will be either the default concept food, or, if the optional sub-pattern was found, the term corresponding to this sub-pattern. Sometimes a slot in the conceptualization can be filled by a term in a higher level pattern of which this one is an element. For example, when analyzing "John wanted to eat a cupcake" a slight modification of the previous pattern is used to find the meaning of "to eat a cupcake". Since no subject appears in this form, the higher level pattern specifies where it may find it. That is, a pattern associated with "want" looks like the following: [<person> <WANT> <infinitive>]. This specifies that the subject of the clause following want is the same as the subject of want. When a word is read PHRAN compares the patterns offered by the pattern suggesting routine with the list *CONCEPT* in the manner described in the example in section 4.1.3. It discards patterns that conflict with *CONCEPT* and retains the rest. Then PHRAN tries to determine which meaning of the word to choose, using the "active" patterns (those that have matched up to the point where PHRAN has read). It checks if there is a particular meaning that will match the next slot in some pattern or, if no such definition exists, if there is a meaning that might be the beginning of a sequence of terms whose meaning, as determined via a pattern-concept pair, will satisfy the next slot in one of the active patterns. If this is the case, that meaning of the word is chosen. Otherwise PHRAN defaults to the first of the meanings of the word. A new term is formed and if it satisfies the next condition in one of these patterns, the appropriate pattern is moved to the pattern-list of the new term. If the next condition in the pattern indicates that the term specified is optional, then PHRAN checks for these optional terms, and if it is convinced that they are not present, it checks to see if the new term satisfies the condition following the optional ones in the pattern. When a pattern has been matched completely, PHRAN continues checking all the other patterns on the pattern-list. When it has finished, PHRAN will take the longest pattern that was matched and will consider the concept of its pattern-concept pair to be the meaning of the sequence. If there are several patterns of the same length that were matched PHRAN will group all their meanings together. New patterns are suggested and a disambiguation process follows, exactly as in the case of a new word being read. For example, the words "the big apple", when recognized, will have two possible meanings: one being a large fruit, the other being New York City. PHRAN will check the patterns active at that time to determine if one of these two meanings satisfies the next condition in one of the patterns. If so, then that meaning will be chosen. Otherwise 'a large fruit' will be the default, as it is the first in the list of possible meanings.
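To make the matching machinery above concrete, here is a minimal Python sketch (our illustration, not PHRAN's actual code): it encodes the three predicate forms described in 4.2.1 — a literal word, a class name, and a property/value pair — and checks a pattern left to right against a sequence of terms. The toy class table and the term dictionaries are invented for the example.

```python
# Minimal sketch of PHRAN-style pattern predicates; data and names are
# illustrative, not taken from the original system.

ISA = {  # toy class hierarchy consulted by class predicates
    "JOHN1": {"PERSON", "ANIMATE"},
    "HIGH-SCHOOL1": {"SCHOOL", "INSTITUTION"},
    "CAKE1": {"FOOD", "PHYSOB"},
}

def word(w):                 # literal-word predicate, e.g. "out"
    return lambda term: term.get("word") == w

def a_class(c):              # class-membership predicate, e.g. (FOOD)
    return lambda term: c in ISA.get(term.get("concept", ""), set())

def prop(name, value):       # property/value predicate, e.g. (ROOT EAT)
    return lambda term: term.get(name) == value

def match(pattern, terms):
    """True if the sequence of terms satisfies every predicate in order."""
    if len(pattern) != len(terms):
        return False
    return all(p(t) for p, t in zip(pattern, terms))

# One (simplified) pattern suggested by the verb 'eat':
eat_pattern = [a_class("PERSON"), prop("root", "EAT"), a_class("FOOD")]

terms = [
    {"word": "John", "concept": "JOHN1"},
    {"word": "ate", "root": "EAT"},
    {"word": "cake", "concept": "CAKE1"},
]

print(match(eat_pattern, terms))   # -> True
```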
In certain cases there is need for slightly modified notions of pattern and concept, the most prominent examples being adverbs and adverbial phrases. Such phrases are also recognized through the use of patterns. However, upon recognizing an adverb, PHRAN searches within the active patterns for an action that it can modify. When such an action is found the concept part of the pair associated with the adverb is used to modify the concept of the original action. Adverbs such as "quickly" and "slowly" are currently defined and can be used to modify conceptualizations containing various actions. Thus PHRAN can handle constructs like: John ate slowly. Quickly, John left the house. John left the house quickly. John slowly ate the apple. John wanted slowly to eat the apple. Some special cases of negation are handled by specific patterns. For example, the negation of the verb want usually is interpreted as meaning "want not" - "Mary didn't want to go to school" means the same thing as "Mary wanted not to go to school". Thus PHRAN contains the specific pattern [<person> <do> "not" <want> <inf-phrase>] which is associated with this interpretation. Retrieving the phrasal pattern matching a particular utterance from PHRAN's knowledge base is an important problem that we have not yet solved to our complete satisfaction. We find some consolation in the fact that the problem of indexing a large data base is a necessary and familiar problem for all knowledge-based systems. We have tried two pattern suggestion mechanisms with PHRAN: 1. Keying patterns off individual words or previously matched patterns.
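The word-keyed suggestion mechanism can be pictured as an index from trigger words to the candidate pattern-concept pairs they may introduce. The sketch below is a hypothetical rendering of that idea; the pair entries are placeholders, not PHRAN's real data.

```python
# Illustrative index from trigger words to candidate pattern-concept pairs.
from collections import defaultdict

suggestions = defaultdict(list)

def register(trigger, pattern_concept_pair):
    suggestions[trigger].append(pattern_concept_pair)

register("drop", ('<person> <DROP> "out" "of" <school>', "PREMATURE-TERMINATION"))
register("drop", ("<person> <DROP> <object>", "RELEASE"))
register("eat",  ("<person> <EAT> (<food>)", "INGEST"))

def suggest(word):
    """Return the candidate pairs to activate when `word` is read."""
    return suggestions.get(word, [])

print(suggest("drop"))   # both 'drop' patterns become active candidates
```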
the production component expresses itself by:
looking for concepts in the data base that match the concept it wishes to express. The phrasal patterns associated with these concepts are used to generate the natural language utterance. 6. The data-base of pattern-concept pairs is shared by both the understanding mechanism and the mechanism of language production. Other associations besides meanings may be kept along with a phrase. For example, a description of the contexts in which the phrase is an appropriate way to express its meaning may be stored. A person or situation strongly associated with the phrase may also be tied to it. PHRAN (PHrasal ANalyzer) is a natural language understanding system based on this view of language use. PHRAN reads English text and produces structures that represent its meaning. As it reads an utterance, PHRAN searches its knowledge base of pattern-concept pairs for patterns that best interpret the text. The concept portion of these pairs is then used to produce the meaning representation for the utterance. PHRAN has a number of advantages over previous systems: 1. The system is able to handle phrasal language units that are awkwardly handled by previous systems but which are found with great frequency in ordinary speech and common natural language texts. 2. It is simpler to add new information to the system because control and representation are kept separate. To extend the system, new pattern-concept pairs are simply added to the data-base.
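Since the same pattern-concept pairs serve both understanding and production, generation can be sketched as a lookup in the opposite direction — from a concept to the phrasal patterns that express it. This is an illustrative sketch under that assumption, not the system's actual code.

```python
# Sketch only: the shared pattern-concept pairs, consulted from the concept side.
pairs = [
    (["<person>", "give", "<person>", "<physob>"], "ATRANS"),
    (["<person>", "eat", "<food>"],                "INGEST"),
]

def phrases_for(concept):
    """Generation direction: find phrasal patterns whose concept matches."""
    return [pattern for pattern, c in pairs if c == concept]

print(phrases_for("INGEST"))   # -> [['<person>', 'eat', '<food>']]
```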
Appendix:
| null | null | null | null | {
"paperhash": [
"birnbaum|problems_in_conceptual_analysis_of_natural_language",
"fillmore|innocence:_a_second_idealization_for_linguistics",
"riesbeck|comprehension_by_computer_:_expectation-based_analysis_of_sentences_in_context",
"becker|the_phrasal_lexicon"
],
"title": [
"Problems in conceptual analysis of natural language",
"Innocence: A Second Idealization for Linguistics",
"Comprehension by computer : expectation-based analysis of sentences in context",
"The Phrasal Lexicon"
],
"abstract": [
"Abstract : This paper reports on some recent developments in natural language analysis. We address such issues as the role of syntax in a semantics-oriented analyzer, achieving a flexible balance of top-down and bottom-up processing, and the role of short term memory. Our results have led to improved algorithms capable of analyzing the kinds of multi-clause inputs found in most text. (Author)",
"The nature of the fit between predictions generated by a theory and the phenomena within its domain can sometimes be assessed only when different sources of explanation can be isolated through one or more idealizations. One such idealization is the simplifying assumption, for the laws of Newtonian mechanics, that the physical bodies whose movements fall within their scope are (or can be treated as) dimensionless particles, not subject to distortion or friction. The empirical laws of elasticity and friction are themselves best formulated against this background idealization.",
"Abstract : ELI (English Language Interpreter) is a natural language parsing program currently used by several story understanding systems. ELI differs from most other parsers in that it: produces meaning representations (using Schank's Conceptual Dependency system) rather than syntactic structures; uses syntactic information only when the meaning can not be obtained directly; talks to other programs that make high level inferences that tie individual events into coherent episodes; uses context-based exceptions (conceptual and syntactic) to control its parsing routines. Examples of texts that ELI has understood, and details of how it works are given.",
"Theoretical linguists have in recent years concentrated their attention on the productive aspect of language, wherein utterances are formed combinatorically from units the size of words or smaller. This paper will focus on the contrary aspect of language, wherein utterances are formed by repetition, modification, and concatenation of previously-known phrases consisting of more than one word. I suspect that we speak mostly by stitching together swatches of text that we have heard before; productive processes have the secondary role of adapting the old phrases to the new situation. The advantage of this point of view is that it has the potential to account for the observed linguistic behavior of native speakers, rather than discounting their actual behavior as irrelevant to their language. In particular, this point of view allows us to concede that most utterances are produced in stereotyped social situations, where the communicative and ritualistic functions of language demand not novelty, but rather an appropriate combination of formulas, cliches, idioms, allusions, slogans, and so forth. Language must have originated in such constrained social contexts, and they are still the predominant arena for language production. Therefore an understanding of the use of phrases is basic to the understanding of language as a whole.You are currently reading a much-abridged version of a paper that will be published elsewhere later."
],
"authors": [
{
"name": [
"L. Birnbaum",
"M. Selfridge"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Fillmore"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Riesbeck",
"R. Schank"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Joseph D. Becker"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null
],
"s2_corpus_id": [
"60367908",
"116372826",
"60546035",
"3919430"
],
"intents": [
[],
[
"background",
"methodology"
],
[],
[]
],
"isInfluential": [
false,
false,
false,
false
]
} | Problem: The problem addressed in the paper is the need for a natural language processing system that can understand the meanings of language utterances beyond just individual words, including idioms, collocations, and structural formulas.
Solution: The paper proposes a knowledge-based system called PHRAN (PHrasal ANalyzer) that focuses on phrasal units as the basic processing unit, storing knowledge about the meanings of larger forms of utterances in pattern-concept pairs, separate from processing strategies, to enhance language understanding and production capabilities. | 536 | 0.065299 | null | null | null | null | null | null | null | null |
63d928d6e2b653aa970b0b6ea0f72d9b37273036 | 11773079 | null | {ATN} Grammar Modeling in Applied Linguistics | Augmented Transition Network grammars have significant areas of unexplored application as a simulation tool for grammar designers. The intent of this paper is to discuss some current efforts in developing a grammar testing tool for the specialist in linguistics. The scope of the systems under discussion is to display structures based on the modeled grammar. Full language definition with facilitation of semantic interpretation is not within the scope of the systems described in this paper. Application of grammar testing to an applied linguistics research environment is emphasized. Extensions to the teaching of linguistics principles and to refinement of the primitive ATN functions are also considered. | {
"name": [
"Kehler, T. P. and",
"Woods, R. C."
],
"affiliation": [
null,
null
]
} | null | null | 18th Annual Meeting of the Association for Computational Linguistics | 1980-06-01 | 10 | 4 | null | cedure, the user enters test data, displays structures, the lexicon, and edits the grammr to produce a refined A~] grarmar description. The displayed structures provide a labeled structural inremyretation of the input string based on the lin=~uistic model used. Tracing'of the parse may be used to follow the process of building the structural interpretation. Computational implemm~tation requires giving attention to the details of the interrelationships of gr~.matical rules and the interaction between the grammar rule system and the lexical representation. Testing the grammr against data forces a level of systemization that is significantly more rigorous than discussion oriented evaluation of gra~er sys ~m,.The model provides a meens of organizing strutrural descriptions at any level, from surface syntax to deep propositional inrerpreta=icms.2. A nemmrk m~el may be used Co re~resent different theoretical approaches Co grammr definition.The graphical representation of a gramrar permitted by the neuaork model is a relati~ly clear and precise way to express notions about struc-t~/re.Computational simulation of the gramsr enables systematic tracing of subc~xx~nts and testing against text data.Grimes (2), in a series of linguistics workshops, d~ strafed the utility of the network model ~ in envi-~u~nts wh~e computational testir~ of grammrs was r~t possible. Grimes, along with other c~ntributors to the referenced work, illustrated the flexibility of the ATN in talc analysis of gr~ratical structures. A~ implerentations have nmsCly focused on effective natural language understanding systems, assuming a computationally sophisticated research envir~t. Inplementatiorm are ofte~ in an envirormm~t which requires some indepth ~mderstanding and support of LISP systems. Recently much of the infornmtion on the ATN formalism, applications and techniques for impler~ntation was summarized by Bates (3). Tnc~h ~amy systems have be~ developed, little attention has been giv~ to =eating an interactive grarmar modeling system for an individual with highly developed linguistics skills but poorly developed c~putational skills.The individual involved in field Lir~=%~istics is concerned with developing concise workable descriptions of some corpus of deta in a ~ven language. Perti~,7~ problems in developing rules for incerpreting surface s~-uctn~res are proposed and discussed in relation to the da~a. In field lir~tics applications, this inwives developing a rmxor~my of structural types followed by hypothesizing onderlying rule systems which provide the highest level of data integration at a | The gm~ral dasi~ goal for the grammr rasing sys~ described here is to provide a tool for developing experimentally drive~, systematic representation models of language data. Engineering of a full Lmguage ~erstamdimg system is not the ~f~mm-y focus of the efforts described in this paper. Ideally, one would Like Co provide a tool which would attract applied linguists to use such a syst~n as a simulation environmen= for model developmen=.design goals for the systems described are: The p~totype grammr design sys~ consists of a gram~r gemerator, a~ editor, and a monitor. The f~mction of U%e gr;~.~ editor is to provide a means of defining and mm%iv~lating gr~mar descriptions w~thouc requiring the user to work in a specific programing langu~e env~uL~,=L~. ~e editor is also used to edic lexicons. 
The editor knows shout the b/N envirormen~ and can provide assistsmce to the user as needed.The monitor's function is co handle input and outpuc of gr~-~ and lexicon files, manage displays and traces of parsir~s, provide o~sultation on the sysran use as needed, and enable the user to cycle from editor to parsing with mi~m,~ effort. The monitor can also be used to provide facilities for studying gram~r efficiemcy. Transportability of the gr~mn~" modeling systsm is established by a progran generator whi~,h enables im-pl~tation in differanc progr~m~ng ~es.Sysr~-sTo deu~lop some understanding on the design amd impleremrmtion requirements for a sysr~n as specified in the previous section, D~o experimenr.al gr~'-~" resting systems have been developed. A partial A~ im-pl~m~nta=ion was dune by ~_hler(A) in a system (SNOPAR) ~dnich provided some interactive gr.~Tr~T and development facilities. SNOPAR imcorporated several of the basic features of a grammr generator and monitor, with a limited editor, a gra-m=~ gererator and a number of other fea=uras.Both SNOPAR and ADEPT are implemenred in SNO~OL and both have been ~:rarmpcrr~ed across opera.rig sysrems (i.e. TOPS-20 co I~M's ~;).For implemm~retion of rex= ediCir~ and program grin,mar gemerar.ion, the S~OBOL& language is reasonable. However, the Lack of ccmprehensive list storage marm@snentis a l~n~tatio~ on the extension of ~ implerenre=ion ~o a full natural language ~mdersr~ sysr~n. Originally, S}~DBOL was used because a suirmble ~ was noC available to the i~plem~r.3.1 SNOPAR SNOPAR prov£des =he following ftmctions: gr~m~.r creation and ecLiting, lexicon oreation end echoing, execution (with some error trapping), Cracing/~t~g2x~ and file handling, lhe grammar creatiun porticm has as am option use of an inrerac=ive grit Co creare an ATN. One of the goals in =he design of ~.~3PAR was to in~'c~,~ce a notation which was easier to read than the LISP reprasemta=ion most frequently used.Two basic formats have been used for wri~ng grabmars in ~qOPA.~. One separates dm conrex~c-free syntax type operations f-con the rests and actions of the grammar. This action block fo=ma~ is of the following gem where arc-type is a CAT, P~RSE or FIN~.~RD e~c., and the test-action-block appears as folluws:=es C-action-b lock sr~re arc-reSt: I action :S(TO(arc-type-bl6d<)) arc-rest ! action :S(TO(arc-rype-block))where an arc-test is a CC~PAR or other test and an action is a ~ or HUILDS type action. Note that m'~ additional intermediare stare is in=roduaed for the test and ac=iuns of the AXN.'lhe more sr~ Jard formic used is ~ve~ as: state-÷ arc-type -~7 con/ition-rest-and-ac=ion-block --7 ne~-staceAn exa~le nmm phrase is given as: The Parse function calls subneu~rks which consist of Parse, C, ac or other arc-types. Structures are initially built through use of the SETR function which uses the top level consti,;:um",c ~ (e.g. NP) rm form a List of the curmti~um~ts referenced by the r~g~j-rer ~ in ~-~x. All registers are =reared as stacks. ~he ~UILDS function may use the implici= r~d'~rer ham sequence as a default to build ~he named structure. ~he 'cop level constitn~nc ~ (i.e. NP) cunr2dms a List of the regisrers set during the parse which becomes the default list for struuture building. ~ere are global stacks for history m~ng and bank up. functions.NPTypically, for other ~um the ~=1 creation of a gr~r by a r~ user, the A~q func~ library of system is used in conjunction wi~h a system editor for gr~.=.~ development. Several A~q gr~n-s have beem wri=r~n with this system. 
| Utilization of the A~N as a grammr definition syst~n in linguistics and language education is still aC an early stage of development. s. Proposed model gr~,,ars can be evaluated for efficiency of representation and exzend-ibilit7 to a larger corpus of data. Essential Co this approad% is the existence of a self-contained easy-Co-use transportable AII~ modeling systems. In the following sections some example applications of gr~m~r r~sting co field lir~=uistics exercises and application to modeling a language indigerJoos to the Philippines ~ given.Typical exercises in a first course in field linguistics give the student a series of phrases or sentences in a language not: known to the student. T~c analysis of the data is to be done producing a set of formul~q for constituent types and the hierarch~a] relationship of ourmtituenCs.In this partic,1]nr case a r~-~nic analysis is dune. Consider the following three sentences selected from Apinaye exercise (Problem I00) 7 sentence in the exercise may be entered, making | null | S ~, an effort co make am e~sy-to-use s~r~d~on tool for lir~u£s~, the basic concepts of SNOPAR were exrer~ed by Woods (5) co a full A~N implememtacion in a sys~ called ADEPT. ADEPT is a sysr.em for ger~ratimg A~I~ program through ~he use of a rmU~rk edir.=r, lexicon ec~tor,error correction and detection _~n%-~z.~:, and a monitor for execution of the griT. Star.e, r~twork, arid arc ec~i~Lr~ are dlst/_n=oz~shed by conrex= and the ar~-.~nrs of ~he E, D, or I c~m~nds. For a previously undefined E net causes definition of ~m ne=#ork. ~e user must specify all states in the rmt~x)rk before staruir~. ~l~e editor processes the srmre list requesting arc relations and arc infor-mcion such as the tests or arc actions. ~he states ere used ro help d~m~ose e~-~uL~ caused by misspelling ~f a srm~e or omission of a sta~e.Once uhe ~=~rk is defined, arcs ~ay by edired by specifying =he origin and dest/na=ion of the arc. ~e arc infor~mcion is presemr~d in =he following order: arc destination, arc type, arc test and arc actions. Each of dlese items is displayed, permit~ir~ rile user to change values on the arc list by ~yping in the needed infor=mtion. t~itiple arcs between states are differentiated by specifying the order nu~er of the arc or by displaying all arcs to the user and requesting selection of the desired arc.N~ arcs are inserted in the network by U~e I mand. -vhenever an arc insert is performed all arcs from the state are nurbered and displayed. After the user specifies the nu~er of the arc that the n~ arc is to follow, the arc information is entered.Arcs nay be reordered by specifying the starting state for the arcs of inCerast using the 0 command. ~e user is then requested ~o specify the r~ ordering of ~Se arcs.Insertion and deletion of a state requires that the editor determine the sta~as which r.'my be reached the new state as well as finding which arcs terminate on the n~4 state. Once this information has been established, the arc information may be entered.~nen a state is deleted, all arcs which inmediately leave the state or which enter the state fr~n other stares are removed. Error ¢onditioos exist~ in the network as a result of the deletion are then reported. The user then ei~er verifies the requested deletion and corrects any errors or cancels the request.Grarmar files are stored in a list format. ~he PUT cou-n,ar.d causes all networP.s currently defined to be written out to a file. GET will read in and define a grammar. 
If the net~..~ork is already defined, the network is r~:~: read in.By placing a series of checking functions in an A~N editor, it is possible to fil~er out many potential errors before a grammr is rested. ~he user is able to focus on the grammr model and not on the specific pro-gra~ming requir~r~nts. A monitor progra~ provides a top level interface to the user once a grammar is defined for parsing sentances.In addition, the monitor program manages the stacks as well as the S~qD, LIFT and HOLD lists for the network gr~m~sr. 9wi~ches may be set to control the tracing of the parse.An additional feature of the ~.bods ADF.Yr syst~n is the use of easy to read displays for the lexicon and gra'iIr~. An exar~le arC is shown:(~)--CAT('DET')--(A_nJ) • ~qO TESI'S. ~ ACTICNS SErR('DEr' )ADEPT ~has be~ used to develop a small gr=~,~r of English. Future exp~ts ere planned for using ADEPT in an linguistics applications oriented m~iron-n~nt. | Main paper:
desi=~ consideratiors:
The gm~ral dasi~ goal for the grammr rasing sys~ described here is to provide a tool for developing experimentally drive~, systematic representation models of language data. Engineering of a full Lmguage ~erstamdimg system is not the ~f~mm-y focus of the efforts described in this paper. Ideally, one would Like Co provide a tool which would attract applied linguists to use such a syst~n as a simulation environmen= for model developmen=.design goals for the systems described are: The p~totype grammr design sys~ consists of a gram~r gemerator, a~ editor, and a monitor. The f~mction of U%e gr;~.~ editor is to provide a means of defining and mm%iv~lating gr~mar descriptions w~thouc requiring the user to work in a specific programing langu~e env~uL~,=L~. ~e editor is also used to edic lexicons. The editor knows shout the b/N envirormen~ and can provide assistsmce to the user as needed.The monitor's function is co handle input and outpuc of gr~-~ and lexicon files, manage displays and traces of parsir~s, provide o~sultation on the sysran use as needed, and enable the user to cycle from editor to parsing with mi~m,~ effort. The monitor can also be used to provide facilities for studying gram~r efficiemcy. Transportability of the gr~mn~" modeling systsm is established by a progran generator whi~,h enables im-pl~tation in differanc progr~m~ng ~es.Sysr~-sTo deu~lop some understanding on the design amd impleremrmtion requirements for a sysr~n as specified in the previous section, D~o experimenr.al gr~'-~" resting systems have been developed. A partial A~ im-pl~m~nta=ion was dune by ~_hler(A) in a system (SNOPAR) ~dnich provided some interactive gr.~Tr~T and development facilities. SNOPAR imcorporated several of the basic features of a grammr generator and monitor, with a limited editor, a gra-m=~ gererator and a number of other fea=uras.Both SNOPAR and ADEPT are implemenred in SNO~OL and both have been ~:rarmpcrr~ed across opera.rig sysrems (i.e. TOPS-20 co I~M's ~;).For implemm~retion of rex= ediCir~ and program grin,mar gemerar.ion, the S~OBOL& language is reasonable. However, the Lack of ccmprehensive list storage marm@snentis a l~n~tatio~ on the extension of ~ implerenre=ion ~o a full natural language ~mdersr~ sysr~n. Originally, S}~DBOL was used because a suirmble ~ was noC available to the i~plem~r.3.1 SNOPAR SNOPAR prov£des =he following ftmctions: gr~m~.r creation and ecLiting, lexicon oreation end echoing, execution (with some error trapping), Cracing/~t~g2x~ and file handling, lhe grammar creatiun porticm has as am option use of an inrerac=ive grit Co creare an ATN. One of the goals in =he design of ~.~3PAR was to in~'c~,~ce a notation which was easier to read than the LISP reprasemta=ion most frequently used.Two basic formats have been used for wri~ng grabmars in ~qOPA.~. One separates dm conrex~c-free syntax type operations f-con the rests and actions of the grammar. This action block fo=ma~ is of the following gem where arc-type is a CAT, P~RSE or FIN~.~RD e~c., and the test-action-block appears as folluws:=es C-action-b lock sr~re arc-reSt: I action :S(TO(arc-type-bl6d<)) arc-rest ! action :S(TO(arc-rype-block))where an arc-test is a CC~PAR or other test and an action is a ~ or HUILDS type action. 
Note that m'~ additional intermediare stare is in=roduaed for the test and ac=iuns of the AXN.'lhe more sr~ Jard formic used is ~ve~ as: state-÷ arc-type -~7 con/ition-rest-and-ac=ion-block --7 ne~-staceAn exa~le nmm phrase is given as: The Parse function calls subneu~rks which consist of Parse, C, ac or other arc-types. Structures are initially built through use of the SETR function which uses the top level consti,;:um",c ~ (e.g. NP) rm form a List of the curmti~um~ts referenced by the r~g~j-rer ~ in ~-~x. All registers are =reared as stacks. ~he ~UILDS function may use the implici= r~d'~rer ham sequence as a default to build ~he named structure. ~he 'cop level constitn~nc ~ (i.e. NP) cunr2dms a List of the regisrers set during the parse which becomes the default list for struuture building. ~ere are global stacks for history m~ng and bank up. functions.NPTypically, for other ~um the ~=1 creation of a gr~r by a r~ user, the A~q func~ library of system is used in conjunction wi~h a system editor for gr~.=.~ development. Several A~q gr~n-s have beem wri=r~n with this system.
adept:
S ~, an effort co make am e~sy-to-use s~r~d~on tool for lir~u£s~, the basic concepts of SNOPAR were exrer~ed by Woods (5) co a full A~N implememtacion in a sys~ called ADEPT. ADEPT is a sysr.em for ger~ratimg A~I~ program through ~he use of a rmU~rk edir.=r, lexicon ec~tor,error correction and detection _~n%-~z.~:, and a monitor for execution of the griT. Star.e, r~twork, arid arc ec~i~Lr~ are dlst/_n=oz~shed by conrex= and the ar~-.~nrs of ~he E, D, or I c~m~nds. For a previously undefined E net causes definition of ~m ne=#ork. ~e user must specify all states in the rmt~x)rk before staruir~. ~l~e editor processes the srmre list requesting arc relations and arc infor-mcion such as the tests or arc actions. ~he states ere used ro help d~m~ose e~-~uL~ caused by misspelling ~f a srm~e or omission of a sta~e.Once uhe ~=~rk is defined, arcs ~ay by edired by specifying =he origin and dest/na=ion of the arc. ~e arc infor~mcion is presemr~d in =he following order: arc destination, arc type, arc test and arc actions. Each of dlese items is displayed, permit~ir~ rile user to change values on the arc list by ~yping in the needed infor=mtion. t~itiple arcs between states are differentiated by specifying the order nu~er of the arc or by displaying all arcs to the user and requesting selection of the desired arc.N~ arcs are inserted in the network by U~e I mand. -vhenever an arc insert is performed all arcs from the state are nurbered and displayed. After the user specifies the nu~er of the arc that the n~ arc is to follow, the arc information is entered.Arcs nay be reordered by specifying the starting state for the arcs of inCerast using the 0 command. ~e user is then requested ~o specify the r~ ordering of ~Se arcs.Insertion and deletion of a state requires that the editor determine the sta~as which r.'my be reached the new state as well as finding which arcs terminate on the n~4 state. Once this information has been established, the arc information may be entered.~nen a state is deleted, all arcs which inmediately leave the state or which enter the state fr~n other stares are removed. Error ¢onditioos exist~ in the network as a result of the deletion are then reported. The user then ei~er verifies the requested deletion and corrects any errors or cancels the request.Grarmar files are stored in a list format. ~he PUT cou-n,ar.d causes all networP.s currently defined to be written out to a file. GET will read in and define a grammar. If the net~..~ork is already defined, the network is r~:~: read in.By placing a series of checking functions in an A~N editor, it is possible to fil~er out many potential errors before a grammr is rested. ~he user is able to focus on the grammr model and not on the specific pro-gra~ming requir~r~nts. A monitor progra~ provides a top level interface to the user once a grammar is defined for parsing sentances.In addition, the monitor program manages the stacks as well as the S~qD, LIFT and HOLD lists for the network gr~m~sr. 9wi~ches may be set to control the tracing of the parse.An additional feature of the ~.bods ADF.Yr syst~n is the use of easy to read displays for the lexicon and gra'iIr~. An exar~le arC is shown:(~)--CAT('DET')--(A_nJ) • ~qO TESI'S. ~ ACTICNS SErR('DEr' )ADEPT ~has be~ used to develop a small gr=~,~r of English. Future exp~ts ere planned for using ADEPT in an linguistics applications oriented m~iron-n~nt.
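The arc notation used by SNOPAR and displayed by ADEPT (state, arc type, test/action block, next state) can be approximated by a small Python structure. The lexicon, register handling, and network below are invented for the illustration and are much simpler than either system.

```python
# Tiny ATN-style noun-phrase network: each arc is (arc_type, test, register, next_state).
LEX = {"the": "DET", "big": "ADJ", "ball": "N"}   # toy lexicon

def cat(label):            # CAT arc test: the next word belongs to this category
    return lambda w: LEX.get(w) == label

NP_NET = {
    "NP":   [("CAT", cat("DET"), "DET", "ADJ")],
    "ADJ":  [("CAT", cat("ADJ"), "ADJ", "ADJ"),   # loop on adjectives
             ("CAT", cat("N"),   "N",   "DONE")],
    "DONE": [],
}

def parse_np(words, state="NP"):
    registers = []                          # a stand-in for SETR's register list
    for w in words:
        for arc_type, test, reg, nxt in NP_NET[state]:
            if test(w):
                registers.append((reg, w))
                state = nxt
                break
        else:
            return None                     # no arc accepted the word
    return registers if state == "DONE" else None

print(parse_np(["the", "big", "ball"]))
# -> [('DET', 'the'), ('ADJ', 'big'), ('N', 'ball')]
```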
experiments in grammar modeling:
Utilization of the A~N as a grammr definition syst~n in linguistics and language education is still aC an early stage of development. s. Proposed model gr~,,ars can be evaluated for efficiency of representation and exzend-ibilit7 to a larger corpus of data. Essential Co this approad% is the existence of a self-contained easy-Co-use transportable AII~ modeling systems. In the following sections some example applications of gr~m~r r~sting co field lir~=uistics exercises and application to modeling a language indigerJoos to the Philippines ~ given.Typical exercises in a first course in field linguistics give the student a series of phrases or sentences in a language not: known to the student. T~c analysis of the data is to be done producing a set of formul~q for constituent types and the hierarch~a] relationship of ourmtituenCs.In this partic,1]nr case a r~-~nic analysis is dune. Consider the following three sentences selected from Apinaye exercise (Problem I00) 7 sentence in the exercise may be entered, making
:
cedure, the user enters test data, displays structures, the lexicon, and edits the grammr to produce a refined A~] grarmar description. The displayed structures provide a labeled structural inremyretation of the input string based on the lin=~uistic model used. Tracing'of the parse may be used to follow the process of building the structural interpretation. Computational implemm~tation requires giving attention to the details of the interrelationships of gr~.matical rules and the interaction between the grammar rule system and the lexical representation. Testing the grammr against data forces a level of systemization that is significantly more rigorous than discussion oriented evaluation of gra~er sys ~m,.The model provides a meens of organizing strutrural descriptions at any level, from surface syntax to deep propositional inrerpreta=icms.2. A nemmrk m~el may be used Co re~resent different theoretical approaches Co grammr definition.The graphical representation of a gramrar permitted by the neuaork model is a relati~ly clear and precise way to express notions about struc-t~/re.Computational simulation of the gramsr enables systematic tracing of subc~xx~nts and testing against text data.Grimes (2), in a series of linguistics workshops, d~ strafed the utility of the network model ~ in envi-~u~nts wh~e computational testir~ of grammrs was r~t possible. Grimes, along with other c~ntributors to the referenced work, illustrated the flexibility of the ATN in talc analysis of gr~ratical structures. A~ implerentations have nmsCly focused on effective natural language understanding systems, assuming a computationally sophisticated research envir~t. Inplementatiorm are ofte~ in an envirormm~t which requires some indepth ~mderstanding and support of LISP systems. Recently much of the infornmtion on the ATN formalism, applications and techniques for impler~ntation was summarized by Bates (3). Tnc~h ~amy systems have be~ developed, little attention has been giv~ to =eating an interactive grarmar modeling system for an individual with highly developed linguistics skills but poorly developed c~putational skills.The individual involved in field Lir~=%~istics is concerned with developing concise workable descriptions of some corpus of deta in a ~ven language. Perti~,7~ problems in developing rules for incerpreting surface s~-uctn~res are proposed and discussed in relation to the da~a. In field lir~tics applications, this inwives developing a rmxor~my of structural types followed by hypothesizing onderlying rule systems which provide the highest level of data integration at a
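The enter-data / display / edit cycle described here amounts to running every test sentence through the current grammar and flagging the ones it cannot analyse. Below is a hypothetical sketch of that loop, with a stand-in parser and made-up test strings; it is not part of SNOPAR or ADEPT.

```python
# Sketch of the grammar-testing cycle: parse each test item, report failures.
def parse(sentence, grammar):
    # stand-in analyser: "succeeds" if every word is known to the lexicon
    return all(word in grammar["lexicon"] for word in sentence.split())

grammar = {"lexicon": {"w1", "w2", "w3"}}          # toy grammar/lexicon stand-in
tests = ["w1 w2", "w3 w2", "w1 w4"]                # invented test sentences

for s in tests:
    status = "ok" if parse(s, grammar) else "FAILED"
    print(f"{status:6} {s}")
```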
Appendix:
| null | null | null | null | {
"paperhash": [
"kehler|snopar:_a_grammar_testing_system"
],
"title": [
"SNOPAR: A Grammar Testing System"
],
"abstract": [
"grammar testing"
],
"authors": [
{
"name": [
"T. Kehler"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null
],
"s2_corpus_id": [
"219303767"
],
"intents": [
[]
],
"isInfluential": [
false
]
} | - Problem: The paper discusses the underexplored application of Augmented Transition Network (ATN) grammars as a simulation tool for grammar designers in the field of linguistics.
- Solution: The hypothesis is that the development of a grammar testing tool based on ATN can be beneficial for specialists in linguistics, particularly in applied linguistics research environments, for refining grammar descriptions and enhancing linguistic principles teaching. | 536 | 0.007463 | null | null | null | null | null | null | null | null |
2241eaf7d2cb3d9ccb197357513495723644c12b | 13939843 | null | On the Existence of Primitive Meaning Units | Knowledge representation schemes are either based on a set of primitives or not. The decision of whether or not to have a primitive-based scheme is crucial since it affects the knowledge that is stored and how that knowledge may be processed. We suggest that a knowledge representation scheme may not initially have primitives, but may evolve into a primitive-based scheme by inferring a set of primitive meaning units based on previous experience. We describe a program that infers its own primitive set and discuss how the inferred primitives may affect the organization of existing information and the subsequent incorporation of new information. | {
"name": [
"Salveter, Sharon C."
],
"affiliation": [
null
]
} | null | null | 18th Annual Meeting of the Association for Computational Linguistics | 1980-06-01 | 2 | 4 | null | All representation systems must have primitives of some sort, and we can see different types of primitives at different levels. Some primitives are purely structural and have little inherent associated semantics. That is, the primitives are at such a low level that there are no semantics pre-deflned for the primitives other than how they may combine. We call these primitives structural primitives. On the other hand, semantic primitives have both structural and semantic components. The structures are defined on a higher level and come with pre-attached procedures (their semantics) that indicate what they "mean," that is, how they are to be meaningfully processed. What makes primitives semantic is this association of procedures with structures, since the procedures operating on the structures give them meaning.In a primitive-based scheme, we design both a set of structures and their semantics to describe a specific environment.There are two problems with pre-defining primitives. First, the choice of primitives may be structurally inadequate. That is, they may limit what can be represented. For example, if we have a set of rectilinear primitives, it is difficult to represent objects in a sphere world. The second problem may arise even if we have a structurally adequate set of primitives. I_n this case the primitives may be defined on too low a level to be useful. For example, we may define atoms as our primitives and specify how atoms interact as their semantics. Now we may adequately describe a rubber ball structurally, hut we will have great difficulty describing the action of a rolling ball. We would like a set of semantic primitives at a level both structurally and semantically appropriate to the world we are describing.Schank [1972] has proposed a powerful primitive-based knowledge representation scheme called conceptual dependency. Several natural language understanding programs have been written that use conceptual dependency as their underlying method of knowledge representation. These programs are among the most successful at natural language understanding. Although Schank does not claim that his primitives constitute the only possible set, he does claim that some set of primitives is necessary in a general knowledge representation scheme.Our claim is that any advanced, sophisticated or rich memory is likely to be decomposable into primitives, since they seem to be a reasonable and efficient method for storing knowledge. However, this set of after-thefact primitives need not be pre-defined or innate to a representation scheme; the primitives may be learned and therefore vary depending on early experiences.We really have two problems: inferring from early experiences a set of structural primitives at an appropriate descriptive level and learning the semantics to associate with these structural primitives.In this paper we shall only address the first problem. Even though we will not address the semantics attachment task, we will describe a method that yields the minimal structural units with which we will want to associate semantics. We feel that since the inferred structural primitives will be appropriate for describing a partitular environment, they will have appropriate semantics and that unlike pro-defined primitives, these learned primitives are guaranteed to be at the appropriate level for a given descriptive task. 
Identifying the structural primitives is the first step (probably a parallel step) in identifylng semantic primitives, which are composed of structural units and associated procedures that 81ve the structures meaning.This thesis developed while investigating learning strategies. Moran [Salveter 1979 ] is a program that learns frame-like structures that represent verb meanings. We chose a simple representative frame-like knowledge representation for Moran to learn. We chose a primitive-free scheme in order not to determine the level of detail at which the world must be described.As Moran learned, its knowledge base, the verb world, evolved from nothing to a rich interconnection of frame structures that represent various senses of different root verbs. When the verb world was "rich enough" (a heuristic decision), Moran detected substructures, which we call building blocks, that were frequently used in the representations of many verb senses across root verb boundaries. These building blocks can be used as after-the-fact primitives. The knowledge representation scheme thus evolves from a primitivefree state to a hybrid state. Importantly, the building blocks are at the level of description appropriate Co how the world was described to Moran. Now Mor~ may reorganize the interconnected frames that make up the verb world with respect co the building blocks. This reorganizaclon renulcs in a uniform identification of the co--alleles and differences of the various meanings of different root: verbs. As l enrning continues the new knowledge incorporated into the verb world will also be scored, as ,-~ch as possible, with respect to the buildins blocks; when processing subsequent input, Moran first tries to use a on~inatlon of the building blocks to represent the meaning of each new situation iC encoiJ~Cer8 • A sac of building blocks, once inferred, need noc be fixed forever; the search for more building blocks may continue as the knowledge base becomes richer. A different, "better," set of building blocks may be inferred later from the richer knowledge and all knowledge reorganized with respect to them. If we can assume that initial inputs are representaClve of future inputs, subsequent processing will approach that of primitivebased systems. | A crucial decision in the design of a knowledge representation is whether to base it on primitives. A primitive-based scheme postulates a pre-defined set of meaning structures, combination rules and procedures. The primitives may combine according to the rules into more complex representational structures, the procedures interpret what those structures mean. A primltive-free scheme, on the other hand, does not build complex structures from standard building blocks; instead, information is gathered from any available source, such as input and information in previously built meaning structures.A hybrid approach postulates a small set of pro-defined meaning units that may be used if applicable and convenient, but is not limited to those units. Such a representation scheme is not truly prlmitive-based since the word "primitive" implies a complete set of pre-deflned meaning units that are the onl 7 ones available for construction. However, we will call this hybrid approach a primitive-based scheme, since it does postulate some pro-defined meaning units that are used in the same manner as primitives. | Moran is able to "view" a world that is a room; the room Contains people and objects, Moran has pre-defined knowledge of the contents of the room. 
3) a parsed sentence thac describes the action thac occured in the two-snapshot sequence.The learning task is to associate a frame-like structure, called a Conceptual Meaning Structure (CMS), with each root verb it enco,mcers. A CMS is a directed acyclic graph that represents the types of entities chat participate in an action and the changes the entities undergo during the action.The ~s are organized so thac the similarities among various senses of a given root verb are expllcicly represented b 7 sharing nodes in a graph. A CMS is organized into two par~s: an ar~,-~-cs graph and an effects graph. The arguments graph stores cases and case slot restrictions, the effects graph stores a description of what happens co the entities described in the arg,,m~,~Cs graph when an action "takes place." A sin~llfled example of a possible ~S for the verb "throw" is shown in Figure i . Sense i, composed of argument and effect nodes labelled A, W and X can represent '~kr 7 throws the ball."Ic show thac during sense 1 of the actlan "throw," a human agent remains at a location while a physical object changes location from where the Agent is to another location.The Agent changes from being in a stare of physical contact with the Object co not being in physical contact with ic. Sense 2 is composed of nodes labelled A, B, W and Y; It might represent "Figaro throws the ball co E-Istin." Sense 3, composed of nodes labelled A, B, C, W, X and Z, could represent "Sharon threw the terminal at Raphael." of the similarity. Similarities among verbs that are close in meaning, but not synonyms, are not represented; the fact that "move" and "throw" are related is not obvious to Moran.A primitive meaning unit, or building block, should be useful for describing a large number of different meanings. Moran attempts to identify those structures that have been useful descriptors. At a certain point in the learning process, currently arbitrarily chosen by the h.m;un trainer, Moran looks for building blocks that have been used to describe a number of different root verbs. This search for building blocks crosses CMS boundaries and occurs only when memory is rich enough for some global decisions to be made.Moran was presented with twenty senses of four root verbs: move, throw, carry and buy. Moran chose the following effects as building blocks: Since Moran has only been presented with a small number of verbs of movement, it is not surprising that the building blocks it chooses describe Agents and Objects moving about the environmen= and their interaction with each other. A possible criticism is that the chosen building blocks are artifacts of the particular descrlptions that were given to Moran. We feel this is an advantage rather than a drawback, since Moran must assume that the world is described to it on a level that will be appropriate for subsequent processing.i) Agent (h,In Schank's conceptual dependency scheme, verbs of movement are often described with PTRANS and PROPEL. ~t is interesting that some of the building blocks Moran inferred seem to be subparts of the structures of PTRANS and PROPEL. For example, the conceptual dependency for "X throw Z at Y" is:) Y | D X~--) PROPEL +.S-Z ( J ! (Xwhere X and Y are b,,m"ns and Z is a physical object. see the object, Z, changing from the location of X to that of Y. Thus, the conceptual dependency subpart:We ) <o z <D J appears to be approximated by building block ~3 where the Object changes location. 
Moran would recoEnize that the location change is from the location of the Agent to the location of the indirect object by the interaction of building block #3 with other buildlng blocks and effects that participate in the action description.Similarly, the conceptual dependency for "X move Z to W" is :z<~)ioc(w)where X and Z have the same restrictions as above and W is a location. Again we see an object changing location; a co,~-on occuzence in movement and a building block Moran identified. | We are currently modifying Moran so that the identified building blocks are used to process subsequent input. That is, as new situations are encountered, Moran will try to describe them as much as possible in terms of the building blocks. It will be interesting to see how these descriptions differ from the ones Moran would have constructed if the building blocks had not been available. We shall also investigate how the existence of the building blocks affects processing time. As a cognitive model, inferred primitives may account for the effects of "bad teaching," that is, an unfortunate sequence of examples of a new concept. If examples are so disparate that few building blocks exist, or so unrepresentative that the derived building blocks are useless for future inputs, then the after-the-fact primitives will impede efficient representation. The knowledge organization will not tie together what we have experienced in the past or predict that we will experience in the future. Although the learning program could infer more useful building blocks at a later timeg that process is expensive, time-consuming and may be unable to replace information lost because of poor building blocks chosen earlier. In general, however, we must assume that our world is described at a level appropriate to how we must process it. If that is the case, then inferring a set of primitives is an advantageous strateEy. | null | Main paper:
what is a primitive?:
All representation systems must have primitives of some sort, and we can see different types of primitives at different levels. Some primitives are purely structural and have little inherent associated semantics. That is, the primitives are at such a low level that there are no semantics pre-deflned for the primitives other than how they may combine. We call these primitives structural primitives. On the other hand, semantic primitives have both structural and semantic components. The structures are defined on a higher level and come with pre-attached procedures (their semantics) that indicate what they "mean," that is, how they are to be meaningfully processed. What makes primitives semantic is this association of procedures with structures, since the procedures operating on the structures give them meaning.In a primitive-based scheme, we design both a set of structures and their semantics to describe a specific environment.There are two problems with pre-defining primitives. First, the choice of primitives may be structurally inadequate. That is, they may limit what can be represented. For example, if we have a set of rectilinear primitives, it is difficult to represent objects in a sphere world. The second problem may arise even if we have a structurally adequate set of primitives. I_n this case the primitives may be defined on too low a level to be useful. For example, we may define atoms as our primitives and specify how atoms interact as their semantics. Now we may adequately describe a rubber ball structurally, hut we will have great difficulty describing the action of a rolling ball. We would like a set of semantic primitives at a level both structurally and semantically appropriate to the world we are describing.
inferring an appropriate primitive set:
Schank [1972] has proposed a powerful primitive-based knowledge representation scheme called conceptual dependency. Several natural language understanding programs have been written that use conceptual dependency as their underlying method of knowledge representation. These programs are among the most successful at natural language understanding. Although Schank does not claim that his primitives constitute the only possible set, he does claim that some set of primitives is necessary in a general knowledge representation scheme.Our claim is that any advanced, sophisticated or rich memory is likely to be decomposable into primitives, since they seem to be a reasonable and efficient method for storing knowledge. However, this set of after-thefact primitives need not be pre-defined or innate to a representation scheme; the primitives may be learned and therefore vary depending on early experiences.We really have two problems: inferring from early experiences a set of structural primitives at an appropriate descriptive level and learning the semantics to associate with these structural primitives.In this paper we shall only address the first problem. Even though we will not address the semantics attachment task, we will describe a method that yields the minimal structural units with which we will want to associate semantics. We feel that since the inferred structural primitives will be appropriate for describing a partitular environment, they will have appropriate semantics and that unlike pro-defined primitives, these learned primitives are guaranteed to be at the appropriate level for a given descriptive task. Identifying the structural primitives is the first step (probably a parallel step) in identifylng semantic primitives, which are composed of structural units and associated procedures that 81ve the structures meaning.This thesis developed while investigating learning strategies. Moran [Salveter 1979 ] is a program that learns frame-like structures that represent verb meanings. We chose a simple representative frame-like knowledge representation for Moran to learn. We chose a primitive-free scheme in order not to determine the level of detail at which the world must be described.As Moran learned, its knowledge base, the verb world, evolved from nothing to a rich interconnection of frame structures that represent various senses of different root verbs. When the verb world was "rich enough" (a heuristic decision), Moran detected substructures, which we call building blocks, that were frequently used in the representations of many verb senses across root verb boundaries. These building blocks can be used as after-the-fact primitives. The knowledge representation scheme thus evolves from a primitivefree state to a hybrid state. Importantly, the building blocks are at the level of description appropriate Co how the world was described to Moran. Now Mor~ may reorganize the interconnected frames that make up the verb world with respect co the building blocks. This reorganizaclon renulcs in a uniform identification of the co--alleles and differences of the various meanings of different root: verbs. 
As l enrning continues the new knowledge incorporated into the verb world will also be scored, as ,-~ch as possible, with respect to the buildins blocks; when processing subsequent input, Moran first tries to use a on~inatlon of the building blocks to represent the meaning of each new situation iC encoiJ~Cer8 • A sac of building blocks, once inferred, need noc be fixed forever; the search for more building blocks may continue as the knowledge base becomes richer. A different, "better," set of building blocks may be inferred later from the richer knowledge and all knowledge reorganized with respect to them. If we can assume that initial inputs are representaClve of future inputs, subsequent processing will approach that of primitivebased systems.
an overview of moran:
Moran is able to "view" a world that is a room; the room Contains people and objects, Moran has pre-defined knowledge of the contents of the room. 3) a parsed sentence thac describes the action thac occured in the two-snapshot sequence.The learning task is to associate a frame-like structure, called a Conceptual Meaning Structure (CMS), with each root verb it enco,mcers. A CMS is a directed acyclic graph that represents the types of entities chat participate in an action and the changes the entities undergo during the action.The ~s are organized so thac the similarities among various senses of a given root verb are expllcicly represented b 7 sharing nodes in a graph. A CMS is organized into two par~s: an ar~,-~-cs graph and an effects graph. The arguments graph stores cases and case slot restrictions, the effects graph stores a description of what happens co the entities described in the arg,,m~,~Cs graph when an action "takes place." A sin~llfled example of a possible ~S for the verb "throw" is shown in Figure i . Sense i, composed of argument and effect nodes labelled A, W and X can represent '~kr 7 throws the ball."Ic show thac during sense 1 of the actlan "throw," a human agent remains at a location while a physical object changes location from where the Agent is to another location.The Agent changes from being in a stare of physical contact with the Object co not being in physical contact with ic. Sense 2 is composed of nodes labelled A, B, W and Y; It might represent "Figaro throws the ball co E-Istin." Sense 3, composed of nodes labelled A, B, C, W, X and Z, could represent "Sharon threw the terminal at Raphael." of the similarity. Similarities among verbs that are close in meaning, but not synonyms, are not represented; the fact that "move" and "throw" are related is not obvious to Moran.
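One way to picture a CMS is as two sets of nodes (arguments and effects) plus senses that are simply subsets of shared nodes. The Python sketch below is our own toy rendering of the "throw" example; the node labels follow the A/B/C and W/X/Y/Z naming used above, but the node contents are paraphrased, not Moran's actual structures.

```python
# Toy rendering of a CMS: argument/effect nodes plus senses that share them.
cms_throw = {
    "arguments": {
        "A": "Agent: human",
        "B": "Recipient: human",
        "C": "Target: human",
    },
    "effects": {
        "W": "Object changes location away from the Agent",
        "X": "Agent loses physical contact with the Object",
        "Y": "Recipient gains physical contact with the Object",
        "Z": "Object arrives at the Target's location",
    },
    "senses": {
        1: {"A", "W", "X"},                 # "Mary throws the ball."
        2: {"A", "B", "W", "Y"},            # "Figaro throws the ball to Kristin."
        3: {"A", "B", "C", "W", "X", "Z"},  # "Sharon threw the terminal at Raphael."
    },
}

# Shared nodes make the similarity between two senses explicit:
print(cms_throw["senses"][1] & cms_throw["senses"][2])   # e.g. {'A', 'W'}
```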
preliminary results:
A primitive meaning unit, or building block, should be useful for describing a large number of different meanings. Moran attempts to identify those structures that have been useful descriptors. At a certain point in the learning process, currently arbitrarily chosen by the h.m;un trainer, Moran looks for building blocks that have been used to describe a number of different root verbs. This search for building blocks crosses CMS boundaries and occurs only when memory is rich enough for some global decisions to be made.Moran was presented with twenty senses of four root verbs: move, throw, carry and buy. Moran chose the following effects as building blocks: Since Moran has only been presented with a small number of verbs of movement, it is not surprising that the building blocks it chooses describe Agents and Objects moving about the environmen= and their interaction with each other. A possible criticism is that the chosen building blocks are artifacts of the particular descrlptions that were given to Moran. We feel this is an advantage rather than a drawback, since Moran must assume that the world is described to it on a level that will be appropriate for subsequent processing.i) Agent (h,In Schank's conceptual dependency scheme, verbs of movement are often described with PTRANS and PROPEL. ~t is interesting that some of the building blocks Moran inferred seem to be subparts of the structures of PTRANS and PROPEL. For example, the conceptual dependency for "X throw Z at Y" is:) Y | D X~--) PROPEL +.S-Z ( J ! (Xwhere X and Y are b,,m"ns and Z is a physical object. see the object, Z, changing from the location of X to that of Y. Thus, the conceptual dependency subpart:We ) <o z <D J appears to be approximated by building block ~3 where the Object changes location. Moran would recoEnize that the location change is from the location of the Agent to the location of the indirect object by the interaction of building block #3 with other buildlng blocks and effects that participate in the action description.Similarly, the conceptual dependency for "X move Z to W" is :z<~)ioc(w)where X and Z have the same restrictions as above and W is a location. Again we see an object changing location; a co,~-on occuzence in movement and a building block Moran identified.
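The building-block search itself can be caricatured as counting which effect descriptions recur across different root verbs and keeping the ones that cross verb boundaries. A hypothetical miniature version, with invented effect labels standing in for Moran's CMS substructures:

```python
# Sketch of the building-block search: keep effects that recur across verbs.
from collections import Counter

effects_by_verb = {   # invented miniature stand-in for Moran's CMS store
    "throw": {"object-changes-location", "agent-loses-contact"},
    "move":  {"object-changes-location", "agent-changes-location"},
    "carry": {"object-changes-location", "agent-changes-location",
              "agent-keeps-contact"},
    "buy":   {"object-changes-possession"},
}

counts = Counter(e for effects in effects_by_verb.values() for e in effects)
building_blocks = [e for e, n in counts.items() if n >= 2]   # cross-verb reuse
print(building_blocks)
```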
concluding remarks:
We are currently modifying Moran so that the identified building blocks are used to process subsequent input. That is, as new situations are encountered, Moran will try to describe them as much as possible in terms of the building blocks. It will be interesting to see how these descriptions differ from the ones Moran would have constructed if the building blocks had not been available. We shall also investigate how the existence of the building blocks affects processing time. As a cognitive model, inferred primitives may account for the effects of "bad teaching," that is, an unfortunate sequence of examples of a new concept. If examples are so disparate that few building blocks exist, or so unrepresentative that the derived building blocks are useless for future inputs, then the after-the-fact primitives will impede efficient representation. The knowledge organization will not tie together what we have experienced in the past or predict what we will experience in the future. Although the learning program could infer more useful building blocks at a later time, that process is expensive, time-consuming and may be unable to replace information lost because of poor building blocks chosen earlier. In general, however, we must assume that our world is described at a level appropriate to how we must process it. If that is the case, then inferring a set of primitives is an advantageous strategy.
:
A crucial decision in the design of a knowledge representation is whether to base it on primitives. A primitive-based scheme postulates a pre-defined set of meaning structures, combination rules and procedures. The primitives may combine according to the rules into more complex representational structures; the procedures interpret what those structures mean. A primitive-free scheme, on the other hand, does not build complex structures from standard building blocks; instead, information is gathered from any available source, such as input and information in previously built meaning structures. A hybrid approach postulates a small set of pre-defined meaning units that may be used if applicable and convenient, but is not limited to those units. Such a representation scheme is not truly primitive-based, since the word "primitive" implies a complete set of pre-defined meaning units that are the only ones available for construction. However, we will call this hybrid approach a primitive-based scheme, since it does postulate some pre-defined meaning units that are used in the same manner as primitives.
Appendix:
| null | null | null | null | {
"paperhash": [
"salveter|inferring_conceptual_graphs"
],
"title": [
"Inferring Conceptual Graphs"
],
"abstract": [
"This paper investigates the mechanisms a program may use to learn conceptual structures that represent natural language meaning. A computer program named Moran is described that infers conceptual structures from pictorial input data. Moran is presented with “snapshots” of an environment and an English sentence describing the action that takes place between the snapshots. The learning task is to associate each root verb with a conceptual structure that represents the types of objects that participate in the action and the changes the objects undergo during the action. Four learning mechanisms are shown to be adequate to accomplish this learning task. The learning mechanisms are described along with the conditions under which each is invoked and the effect each has on existing memory structures. The conceptual structure Moran inferred for one root verb is shown."
],
"authors": [
{
"name": [
"Sharon C. Salveter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null
],
"s2_corpus_id": [
"5202078"
],
"intents": [
[
"background"
]
],
"isInfluential": [
false
]
} | null | 536 | 0.007463 | null | null | null | null | null | null | null | null |
7ec87b43501e406b18127fb897659450aa4b2b15 | 11007680 | null | Flexible Parsing | When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking off and restarting, speaking in fragments, etc. Their human listeners are usually able to cope with these deviations with little difficulty. If a computer system wishes to accept natural language input from its users on a routine basis, it must display a similar indifference. In this paper, we outline a set of parsing flexibilities that such a system should provide. We go on to describe FlexP, a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system. | {
"name": [
"Hayes, Phil and",
"Mouradian, Geroge"
],
"affiliation": [
null,
null
]
} | null | null | 18th Annual Meeting of the Association for Computational Linguistics | 1980-06-01 | 20 | 89 | null | When people use natural language in natural conversation, they often do not respect grammatical niceties. Instead of speaking sequences of grammatically well-formed and complete sentences, people Often miss out or repeal words or phrases, break off what they are saying and rephrase or replace it, speak in fragmentS, or use otherwise incorrect grammar. The Iollowing example colwersation involves a number of these grammatical deviations:A: I wmlt., can you send a memo a message to to Smith El: Is Ihal John or John Smith or Jim Smith A: Jim Instead of being unable or refusing to parse such ungrammaticality, human listeners are generally unperturbed by it. Neither participant in the above example, for instance, would have any di|ficulty in Iollowing the conversation.If computers are ever to converse naturally with humans, they must be al)l~, to l)nr.~t~ th~4ir inl)Id :.is ilexii~iy and rni)Izslly ;m htlnmns do. While considerable advances have been made in recent years in applied natural language processing, few el the systems thai have bean constructed have paKI 5uificien, uttenlion In Iho kinrIs el devialio=l that will inevitably occur =u~ their ulq)ul if (f)ey are tlsed In ,' natural environment. In many cases, if the user's tat)tit (ions sol COlllefnl to tile sysh.~m's grammar, an in(iication of incomprnllermanl) followed by a rerluest to rephrase may be Ihe best he (:a=~ P~xt~¢~(:l W(; ht~.liP.vt. • Ihat .~uch ,fllexibili!y i. parsing severely limits Ihe practicality O| natLiral language contpuler hderl:~rces, an(| is a major roasell why nalar~d language tlaa yet to find wide acceptance in sucl~ ;tpplications as database retrieval Or interactive carom{rod langut,.ges.In this paper, we report on a flexible parser, called FlexP, suitable for use with a restricted natural language interlace to a limited-domain counputer system. W~. describe first the kinds of grammatical deviations we are trying Io deal with, then the basic design decisions for FlexP with juslificalion for them based on the kinds of problem to be solved, and finally more details of our parsing system with worked examples of its operation. These examples,and most of the others in tl~e paper, represent natural language input to an electronic mail system that we and others [1 I are constructing as part of our research on user interfaces. This system employs FlexP to purse ils input.There are a number of distinct types of grammatical deviation and not ;ill lypt~; ;|r~ tl)tOll~l it1 ;Ill Iypes of COlnlnunicatJon siltiation. In tllin so;cites. we first define the restricted type el communication situation that we will be concerned will1, thai of a limile~-I-domain computer system and its user communicating via a keyboard and (hsplay screen. We then present a taxonomy of grammatical deviations common in this context, and by implication a set el parsing flexibilities needed to dealwith them.In the remainder of this paper, we will focus out a restricted type of canto)unitarian situation, that between a limited-domain system and its user, and on the p:trsing flexibilities neede(f by suuh a system Le ColJe with the user's inevitable grammatical deviations. 
Examples of the type of system we have in mind are data-b;~e retr0eval systems, electroa)ic mail systems, medical diaunosis systems, or any systems operating in a domain so rE'stricted thai they can COmpkHely understand ;311y relevant input a user might provide, In short, exactly the kind O! system that is normally used for work in applied natural Imtguage processing. There are several points to be made.First. although ,~uch systems can be expected to parse and understand anythi,lg relevant la their domain, their users cannot be expected to confine tllemselves to relevant input. As Bohrow el, al. 121 .ale. users oflcn explain Iltl~ir underlying motivations or olhorwzse jt=nlify their l(~(Itli'.%l,'~ ill l(~llnB ~Itlih~ ilr(!l~v;ilil Ill lh(!' (i()lnain ()fth(: ~yst~in. ]'hit ro,~tlJ| is lhal slJch systems cannot expecl Io parse ;.dl llx~il inlnH .,:vun wdh lhe use of flexible parsirx.j lechniqq..Secondly. a flexible parser is just purl of the conversational comporient of such ;,I system, ai'id cannot solve all parsi,g problems by itself, For example, il a parser can extract two coherent fragments train an otherwise incomprellensible input, the decisions about what Ihe system should next must be made by another component of the system. A decision on wllether to jump to a conclusion about wllat the user intended, to present him with a set of alternative interpretations, or to profess total confusion, can only be made with information about the Itistory of the conversation, beliefs about the user's goals, and measures of plausibility for any given action by the user. See [7~ for more discusSion o| Ihis broader view of graceful interaction in man-machine communication. Suffice it to say that we assume a flexible parser is iust one component of a larger system, and Ihal any incomprehensions or ambiguities that it finds are passed on to another component of the system with access to higtler-level information, putting it in a better Position to decide what to do next.Finally, we assume that, as usual for such systems, input is typed, rather than spoken as is normal in human conversations. This simplifies low.level processing tremendously because key-strokes unlike speech wave-farms are unambiguous.On the other hand, problems like misSpelling arise, and a flexible parser cannot assume thut segmentation into words by spaces :Slid carriage returns will always be corr~:t. However, such input is stilt one side of a conversation, rather than a polished text in the manner of most written material. As such, it is likely to contain many of the same type of errors normally found in spoken conversations.Misspelling is perhaps the most common form of grammatical deviation in written language. Accordingly. it is the form of ungrammaticality that has been dealt wdh the most by language processing systems. PARRY J t I J. I.II'E[1 Jl~ I. ;taxi tlumernus olher systems have tried te correct misspell i.p0Jt from their users.llhis n(,:' £a,mch w;l~ Sll~ll~.i~tl by IIH~. A. ll,ce OliVe uI SCI~IlliIic nl!s('lllc:h till(Jilt" An ability to correct spelling implies the existence of a dictionary of correctly spelled words An input word =tot fot.ld m the dictionary is assumed to be misspell and is compared against each of the dictionary words. If a dichonary word comes close enough to the input word according to some criteria of lexical matching, it is used in place of the input word.Spelhng correction nloy be attempted in or out ol COntext. 
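As a rough illustration of the context-restricted dictionary search described above, the following sketch corrects an unknown word against a vocabulary narrowed by the current context. It is illustrative only; the vocabulary, function name and cutoff are assumptions, not code from LIFER, PARRY or FlexP.

```python
import difflib

# Illustrative sketch of context-restricted spelling correction; the vocabulary and
# similarity cutoff are invented for the example.
VOCABULARY = {"show", "me", "one", "of", "the", "messages", "delete", "all"}

def correct(word, expected=None):
    """Prefer candidates the current context allows; fall back to the full vocabulary."""
    candidates = list(expected) if expected else list(VOCABULARY)
    matches = difflib.get_close_matches(word, candidates, n=1, cutoff=0.6)
    return matches[0] if matches else word

print(correct("mesages"))                       # -> 'messages'
print(correct("on", expected={"one", "all"}))   # with the expected set narrowed by context, 'on' corrects to 'one'
```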
For instance.there is only one regson~.lble correction for "relavegt" or Ior "seperate". l)td Ior all mlitlI like *'till" SOltle k.'~d at conlext is typlc;.dly ilecossory as m 'TII see yet= tm April" or "he w;.tS shot will} ltle stolen till." In ellect, c(}lltexl c;in Lx.. t !.lse(I to rc(ttlCO tile size Oi Ihe diClll)ltaly tO i}e searched for correct words. )'his lJt}lh n}akl,=s Ihe seuich inure t:|ficlent al}d red}ices tile possibilily el nlullll)le Ill.:ll(.;hus OI Ihe input ;.tgalllSt life LliCtiOI}afy. The LIFEF1 {UI sysletn uses tile strong cun:;tralnIs typically llrovlde~ by its SCII};.n}IIC gl;nnlnal if} IhlS way to r(.'~Iuc(3 tile range el possibilities Ior spelling correction.A particukvly troublesome kind of spelling error results in a valid word different from the one intended, as in "show me on of the messages". C|Parly. ~lich on error colt only t~e corre(;It~l Ihrotlgh cI)nlp;Irison against -'. contextually determined vocabulary.Even accomplished users Of a language will sometimes encounter words they do not know. Suci} situations are a test of their language learning skills. If one (lidn'l know tile word "fawn". one could nt least decide it was a cotour from "a fawn COlOUred sweater". There is. however, a very common special subclass of novel words that is well within the capabilities of present day systems: unknown proper names. Given an appropriate context, either sentential or discourse, it is relatively straightforward to parse unknown words into tile names of people, places, etc. Thus in "send copies to Moledes.ki Chiselov" it is reasonable to conclude Iron} the local context that "Moledeski" is a first name. "Chiselov" =s a suman~e, and together they identily a person (the intended roe:pit'.hi of the copm~5). Strnt~gles like this were used in the POLITICS [St. FRUMP 16J. and PARRY 11 I I systems.Since novel words are by definition not in the known voc=bulary, how can a parsing system distiogt,sh them from misspellings? In most cases. the novel words will not be close enough lo known words to allow SUCCeSSful correction, aS in the above oxamole, bul this is not illways true; an unknown first name of "AI" COUld easily be corrected to "all". Conversely, it is not s~te to assume that unkl}own words ill contexts which allow proper names are re;.}lly proper names as in: "send copies to al managers". In this example. "or" probably should be corrected to "all". In order to resolve such cas~. it may be necessary to clleck ;}gainst a list of referents lor proper nameR, if this is known, or otherwis(~ to consider such factors aR whelher tile inlli;ll letters of Iho words are capilalized.AS lar as we know. no systems yet constr,ctc<t have int~jroted their handling of mi.~spclt wortl.q iln(t unknown, proper nanl~"s Io Ihe degree oullined ;.Ifl¢)v~.,. However, It}t~ COOP 19l .~,y,,it{~ln allows sysllHllnlic access In a dat;.i llaSt. • (:Ulllailllll~j |)lOller ii;nnes wllhotll Ihe ni'~L~t Ii)l ilICitlSlOll of Ihe words ,1 Ihe system's ilnrsing vocabulary.Wntten text is segmented into words by spaces and new lines, and into higher level units by commas, periods and olher punctuation marks. Both classes, especially the second, may be omitted or inserted speciously. Spoken laf~gtJago s a so segmented, but by the Clt,te different markers of stress, interaction and noise words and phrases: we will not cons=der those further here. IncorreCt segmentation ;ll the lexical level results in two or more words being run togetl)er, as in "runtogether". 
or a single word being split up into two or more segments, ns in "tog ether" or (inconveniently) "to get her". or combinations of these effects as in "runlo geth el". In all cases, it seems natural to deal with such errors by extending the spelling correction mechanism to be able to recognize target words as initial se(jments of unknown words, and vice-versa. AS far as we know. no current systems deal with incorrect segmentation into words.The other type of segmenting error, incorrect punctuation, has a much broader impact on parsing methodology. Current parsers typ;catty work one sentence at a time. and assume that each sentence is terminated by an explicit end of sentence marker. A flexible parser must be able to deal with Ihe potenliai absence of such a marker, and recognize the sentence boundary regardless.It sllould also be able to make use of such punctuation if il is used correctly, and to ignore it if it is used incorrectly.Instead of punCtuation, many interactive systems use carriage-return to il~'Jicale sentence termination. Missing sentence terminators in this case correspond to two sentences on one line. or to the typing of a sentence without the terminating return, while specious terminators correspond tO typing a sentence on more than one line.In spoken language, it is very common to break off and restart all or part of an utterance: I want to --Could you lell me the name? Was tile man --er--tile ofliciol here yesterday?Usually. such restarts are sKjnall~l in some way. by "urn" or "er". or more explicitly by "lers back tip" or some si,,Ior phrase.In written language, such restarts do not normnlly occur because they are erase(l by lhe writer bolore the reatler sees Ihenl.interactive COmputer sysle--n~ typically prpvide facilitios for Iheir users tO delete the last cllorocler, word. or ctlrletlI hno as Ihotlgh ii had never been typed, for the very purpose of allowing such restalts. Given these signals, tl~e lustarIs aru ~Jasy Io (letecl anti inlerpr(;I. However. sonle|inlL'bs tIS(~rs I:lll to make use ol Ihese s=gnals. Sometimes. for instance, i~lptlt not containing a carriage-return can be spread over several lines by intermixing of input and output.A flexible parser should be able to make sense out. of "obvious" restarts that are not signalled, as in: delete the show me aU the messages from SmithNaturally occurmg language often involves utterances that are not complete sentences.Often the appropriateness of such fragmentary utterances depends oil conversational or physical context as in:A: Do you mean Jim Smith or Fred Smith? B: Jim A: Send a message to Smith B: OK A: with copies to Jones A flexible parser must be able to parse such fragments given the appropriate context.There is a question here of what such fragments should be parsed into. Parsing systems which have dealt with the problem have typically assumed tl it such inputs are ellipses of complete sentences, and that their parsing involves finding that complete sentence, and pursing it. Thus the sentence corresponding to "Jim" in the example above would be "I moon Jim". Essenhally this view has been taken by the LIFER [81 and GUS [2l systems. An alternative view =s that such fragments are not ellipses of more complete sentences, but are themselves complete utterances given tile context in which they occur, and sholdd be parsc<l as such. 
We have taken this view in our approach to flexihto parsing, as we will explain more fully below.Carbonoll (personal communication) suggests a third view appropriale for some fragments: that of an extended case frame, hi tile second examt.lle above, for instance. A's 'with copies fo Jones" forms a natural pint ul the c=ts~.' Irame est~.lblish~t fly "Self(| a message to .~;mith" Yet :molh~.,r approach to Ir~lgmnnt l)ar:;iflq is taken in the PLANES system ~ 12[ which always parses in terms el major fragments rather than Complete utterances. This technique relies on there I~ing only one way to combine Ihe fragments thus obtained, whicll may he a reasonable aSs|lnlptJon tar ill;.iny limited clara;rot systenls.Ellipses call ulna occur without regard Io context.A type Ihal inleract=ve .';yshtms are paHK:uhtrly likely 1o I:.lce is cryl)licness in which ;irhcles :tnd fdh(~r nOll-e~.~.%enlJ;iJ words are entitled ;is ill ":;how nleSS;.IgOS alter June 17" inste.;p.I ol the m¢lre complete ".,;how me all mesnacles dat(.~l after June 17" Again, tiler(: is a question of whether to consider Ihe cryptic tnl)LII cunlpluh~, which would me~fn inodJlying file system's urzmmmr, or whether to consider il ellil}tical, and cnmplele it by using Ilexlble techniques te parse if against the comply.re versioll as it exisls in Ihe standard gr;Inlnlar. Since conjunctions can support such a wide range of ellipsis, it is generally impractical to recognize such utterances by appropriate grammar exlensions. Efforts to deal with conhlnctJon have Iherefore depended on general mecllanisms which supplement the basic parsing strategy, as in fhe LUNAR system [fSl, or wilich modify the grammar temporarily, as ill the work el Kwasny and Sondheimer I IOI. We have not attempted 1o deal wilh tills type of ellipsis in our parsing system, and will not discuss further the type at flexibility it requires. It is retahvely straightforward for a system of limited comprehension to screen out and igfloro standard noise phrases such as "1 think" or "as lar as I can tell".More troublesome are interjections that cannel be recogni,~ed by the system, as might for instance be the case in where the unrecognized intefiections are bracketed. A flexible parser should be able to ignore such interjections. There is always tile chance that the unrecognizc~t part was an important part of what tile user was Iryillg In say, bl.fl clearly, the problems that arise from tills c;.tnllot be handlml by a parser.Omissions of words (or phrases) from the input are closely related to cryptic input aS discussed above, and one way of dealing with cryptic IflpLll in to treat il as a set of omi.~,~ions. However, Jn Cryptic input only iness~.*fdi~d ifdormaliOll is missed oul. while it is cooceivable thai one could also onlit essential ifllormation as ill:Display Ihe men,age June t 7Herr~ it is unclear whether tile Si)e[lker illeans a ines.,Ja(le dated ell ,hlne t f or b*:lore Juno 17 or ;liter June 17 (we assume that the system addfessc~t Calf di.~;t)lay lhilt~ts illlfn(.~lJately, or i1ol at all). If aft onlission can b~ i1;llrowl~(I (l()Wll ill IhJs w;ly, tile I);fr.°,l?r nllnldd he. • ;it)k. TM tO gE,itf'!r;llP :ill tile alfern~diven liar c¢lnh~xtual resohllinfl nf the ambiHllily or for the basis of a (lllesti(lll Io tile us¢.~r). If tile omis.'~inn can be narrowed down to one ;llh.~rn;llive fhell tile illl)tlt was flleloly CI yl)tic.Besides omitting words and phrases, people sometimes substitute incorrect or unintended ones. 
Often such substitutions are spelling errors and should be caught by Ihe spelling correction mechanism, but sometinles they are inadvertent substitutions or uses of equivalent vocabulary not known tO the system. This type of substitution is just like an omission except that there is an unrecognized word or phrase in the place where tile omitted input should have been. For instance, in "the message over June 17", "over" takes the place of "dated" or "sent after" or whatever elst: is appropriate at that point. If the substifution is of vocabulary which is appropriate but unknown to the syslem, parsing o| substihlted words can provide tl~e basis of vocabulary extension.It is not uncommon for people to fail to make the appropriate agreement between the various parts of a noun or verb phrase as in :I wants to send a messages to Jim Smith. ]'he appropriate action is to ignore the lack of agreement, and Weischedel and Black [13J describe a melhod for relaxing the predicates in an ATN which typically check for soch agreements. However, it is generally not possible to conclude locally which value of the marker (number or person) for whicll the clash occurs is actually intended. We considered examples in which the disagreement involves more than inflections (as in "tile message over Jr,he 17") in the section on substitutions.Idioms are phrases whose interpretation is not what would be obtained by parsing and interpreting them constructively in the normal way, They may also not adllere to the standard syntactic rules. Idioms must thus be parsed as a whole in a pattern matching kind of mode. Parsers based purely oil patlern matching, like thai el PARRY I I t J, titus are able to parse idioms naturally, while others must eifher add a preprocessing phrase of pattern matchimj as in tile LUNAR system [15~. or mix specific patterns in will1 more general rules, as in Ihe work of Kwnsny and Sondheimer [10] . Semantic grammars [3, 81 provide a relatively natural way of mixing idiomatic and more general patterns.In normal hunlall conversalif}fl, once SOme;Ihing is said, it is suid and c;.tllnOt be ch,lnul.~t, excl;pt indirectly by more words wlfich refer Uack to tile original ones.In inleractively typf.~l lie)at, there is alwayS the possit)ilily thai a user nlay notice ;.in error he has made ;.ind go back an(I correcl it hmf.~(:ll, wilhoul wading for the :wstem to ptlrslle =Is own, possibly slow and inef[e(:tive, motile(Is el correction. Wilh appropriate editing lacilities, Ihe user may do this wilhoul erasing inlervening words, alld, if |he system is processing his input oil a word by word basis, may | null | Most current parsing systems are unable to code with most of the kinds of grammatical deviation outlined above. This is because typical parsing systems attempt to apply their grammar to Illeir input in a rigid way, and since deviant input, by defimtion, does not conform to the grammar, they are unable to produce any kind of parse for it at all. Attempts to parse more flexibly have typically involved parsing strategies to be used after a tog-down parse using an ATN It4J or similar tran~lion net has failed. thus alter a word that the system has already processed. A flexible parser must be able to take advantage of such user provided corrections to unknown words, and to prefer them over its own corrections. It must also be DreDared to change its parse if the user changes a valid word to another different but equally valid word.We have constructed a parser, FlexP. 
which can apply its grammar tO its input flexibly, and thus deal wdh the grammatical deviations discussed in the previotls sechon We shotdd empllas~;~e, however, that FlexP is designed to be used in thu lltturluce to a restncted-domain system AG such. it is intended to work Irom a domuilt-sDecific semantic grammar. rather titan one st.tuble Ior broader classes of input. FlexP thus does not embody a solutloll for Ilexible parsing of natural language in general. In describing FlexP. we will note those of its techoiques that seem unlikely to scale up to use with more complex grammars with wider coverage.We have adopted in FlexP an approach to flexible parsing based not on ATN's. but closer to the pattern-matching purser OI tile PARRY system [11J. possibly tim most robust parser yet constructed. Our approacl~ is based on several design decisions:• bottom up rather than top-down por~ing: This aids io the • Parsing el fragmentary utterances, un(I in the r~rxll.li¢,l nf interjechonR alld restarts.• pattern matching: 1 Ilis is essential Inr idioms, and also aids in tile ilelection n! omissions and sobsMutions in non-i(limontic phrases.• parse suspension and conli,luoiion: Thu ;tt)ilily to F.uspelld it I);Irse and letter re.~Lin|e il.'; I)rocnRsilU,| i~ illtllortant for intorlections, restarts, and non-explicit terntinolions.In the remain(ler of this section we examine and juslify these design decisions in more detail.Our choice of a bottom-up strategy is based o, our need to rocu~jnize isolated sentence Iragments. If an utterance which would normally be considered only a fragment of a complete sentence is to be recognized top-down, there are lwo approaches to take. First. the grammar can be altered so that Ihe fragment is recognized as a complete ulteraoce in its own right. This is undesirable bee;ruse it can cause enormous exp;msion of the grmnmar, and because it becomes difficult to decide whether s fragmeot appears in isolali~ or as port OIa larger utterance, especiully if the possibility of missing end of sentence markers also exists. The second option is for the purser to infer from the convers;ttidnal context what grammatical sub-category (or sequence of sub-cate(jories) the fragment might fit Dnto. and thee to do a top-down parse tram that sub-category. This essentially is tile tzlctic used in the GUS [21 and LIFER lot systems. This strutegy =s clearly better than the first one. but has two Problems; first of predicting all no.ss~ble sub-categories which might come next. and secondly, of inefficiency if a large number are predicted. Kwosr.y and Sondheimer I10] use :. combination of the two strategies by temporarily modifying an ATN grammar to accept fragment categories as complete ulterances at the braes they are contextually predicted.Pattern-uP Doming avoids the problem of predicting what sub-categories may occur. If a fragment filling a given sub-category does occur, it is ~3rsed as such whatever the context. However. if n given input can be p.'~rsed as more thon one sub-category, the bottom-up approach would llave to produce them all. even if only one would be predicted top-down. In a syslem of limited comprehension, fragmentary recognition is sometunes necessary because not all of an input con be recognized, rather tilan because el intentional ellipsis. Here. 
it is probably in)possible to make pte(tictloos altCI bottom-up pursing is tile ()lily toothed that is likely to work.As described below, boltom-up stnltegms, coupled with suspended purses, are also helphrl in recognizing mteqections and restarts.We have chosen to use a granlnlar of linear I);lltorns rntller thao a ITuiiSlllOn network boc;.ttl..;e palterll-nl{llChlllg ineshus well wllll I)olJoln.up purSlllg, bec;.itise it f;.1ciIitutes reco~l|lllOiI (11 UIIuI;uIcuS wilh nllli.%sioIl.~ ;|llt| SUbStitutiOnS. ;|ll(i [~3cause it is I~eces.~.ury ;.lllyw;ly l~Jr tile lecogndion oi i(tidm;itiC phrases. TIIu (.}r31lllil;.t; oJ the parser is ;.= SOt of rewrde or I)roduCtlOIt rlllt~$ whose tell h;.u)(I :role is ;.t til)(.l[il II;.l|tL=fn Of COil:;llttlHIttS (ll;XlL;;.ll ()1 hl(Ih(}l k:vel) ;tltll wllose right hand side derides a result constWJi}ot. Elenleots el the pattern may be labelled opholsal or allow for repeated matches, We make the assumption, certainly true Ior the grammar we are presently working with. that the grammar will be semantic rather than synt{tctic, with patterns corresponding tO idiemntic phrases or to object and event descriph~,ls meonulgful it) some hmitod domain, rather than to general syntactic structures.Linear patterns fit well with bottom-up parsing because they can De indexed by any of their components, and because, once indexed, it is straiglltforward to confirm wl)ether a pattern matches input already processed in a way consistent with the way II~e pattern was indexed.Patterns help with rite detection of omissions and substitutions because in either case the relevant pattern can still be indexed by the remaining elements that appear correctly in the input, and thus the pattern as a whole can be recognized even if some of its elements are missing or incorrect. In the case of substitutions, such o technique cnn actually help locus the st~011ing correction, proper name reco(jnition, or vocabulary learning techniques, whichever is appropriate, by tsolahng the substituted input and the pattern constituent which it should have matched. In effect. this allows the normally bottom-up parsing strategy to go top-down to resolve such substitutions.In normal left to right processing, it is not necessary to activate all the patterns io(lexed by every new word as it is COnSidered. If a new word is accounted lot by a pattern that has already been partKflly matclled by previous input, it is likely that no other patterns need to be indexed and mulched Io~" thai input, ll)ts heuristic Plows FlexP's pursing algorithm to limit the number of patterns it toes to ntatch. We should emphasize. however, that it is a I'.ettr|stic. and while it has caused us no trouble with the limited*domino grammar we have been using, it is unclear how well it would transfer to a more complex grmnmar. FlexP's algorithm does. however, carry along ntultii)le partial par.."~es in other alliblguOUS cases. removing tile need for any backtracking.FlexP employs the technique of suspending a Parse with the possibility el later cominualion to help with the recognition of inlerlecliofls, restartS. and implK, il termlnatio,s. Tile I}arsmg algurittun works tell to right in a t}re:tdlh-lir.qt retainer. It ntainlui=is a set of p;Irtiu! parses, each el which ~tccotlnts for Ihe input ulre~lty proces.=~..(t but riot yet accot.llod lot by .' 1 COmpleted pari.;e. The purser attempts to incorporate o~tch new input into each of Ihu P;trtial p~.~rsOs. 
I{ Ihis is successful, the t)artiul parses are exleniled al~l lil:ly irlcreos~ or decrease ill ittinlber. If no partial purse can be extendo~t, the entire set is ~.lVed as a SUspended parse, There are several possible explanations for input mismatch. Le. the failure o! tile nex! input tO extend a parse.• The input could be an implicit terminal=on, i.e. the start of a new top-level utterance, and the previous utterance should be assumed complete.• t he: Inp¢ll ¢util~.i b~J a reslart, m whlcll case li.e active Parse should be abandoned and a new parse starte(I Item that point.• The input could be the start of an interjection, io which case lhe actwe parse should be temporarily suspended, and a new mtrse started for the intorlection.It is not possible, in general, tO dL~tmguish between these cases at the time tim mismatch occurs. II the active parse is not at a possible termination Point. then input mismatch cannot indicate implicit termioation, but may indicate either restart or interjection. It is necessary to suspend the active parse and wuit to see if it is continued at the next input mismotclt. On the other hand. if the active parse is at a possible termination point, input mismutch does not rule out interjection or even restart. In this situation, our algorithm tentatively ussumes that there has been an implicit termination, but suspends the active parse anyway for subsequent potential continuation.Note also that tl~e possibility el implicit termination provides justification for the strategy of interpreting each input immediately it is received. If the input signals an implicit termination, then the user =nay well expect the system to respond immediately to the input thus terminated.This section describes how FlexP achieves the Sex=bit=ties discussed earlier, The implementation described is being used as the parser for an intelli(jent interface Io ;i multi-mediu message system [ 1 ] , The intelligence in this interface is cnncentrated in u tl.ser A(lent whictl =ned=sites between the user and the underlying tool System. The Agent ensures that the interaction goes smootlfly by, amoog other things, checking Ihat tile user specifies the operations he wants performed and their parameters correctly and uuumbiguously, conducting a dialogue wilh the user if prohlems arise. Th(: role el FlexP" us tile Agent's parser is to transform the user's input into the internal ropresenlutions employed by tile Agent. Us.idly this inl)ut is a re(Itlest for aclio, hy the to(ll or a description of obiects known to the tool. Our exzmq=les are drawn from that context.Interpretation begins as soon as any input is available, The first word is used us an index into the store of rewrite rides. Each rule gives a pattern and u structure to be pr=xlu(:od when lira pattern is matcherf. The components el the structure ure built from the structures or words which match the elements of the pattern. The word "display" indexes the rule: We call the partially-instantiated pattern which labels the zipper node a hWJothesis. It represents a possible interpretation lot a segment of input.The next word "new" does not directly match the hypothesis, but since "new" is a MsgAdj (an adjective which can modify a description of a message), il indexes the rule: Here. "?" means optional, and ..... means repeatable. For the sake Of clarity, we have omitted other prefixes which distinguish between terminal and non-terminul pattern elements. 
Tile result of this rule fits the current hypothesis, so extends the purse as follows: The third input m;.dcho.,; Ihe C;.It(~tlory M:;gl-lead (head noun el a met.sage (lest:Silltion) and so lits tile current hypothesis, This match lills the lust non-oplional slot in Ihut pattern. By doing so it makes tile current hypothesis and its parent pattern potemia/ly complete. When the parser finds a potentially complete phrase whose result is of interest to the Agent (and the parent phrase in this example is in that category), the result is constructed and sent. However. since the p;irs~,r has not seen a lomlination signal, this purse is kepl u(.,hvu. Ihu iiq)ut 5,;us su lur may be only a prefix Ior some longer utterance such as "display new messages about ADA". In this case "ubout ADA'" would be recognized as a match for MsgCase (a prepositional phrase that can be part of a message description), the purse would be extended, and a revision of the previous slructul'e sent to the Agent. | When an input word cannot be found in the dictionary, spelling correction is attempted in a background process which runs at lower priority than the parser, 1"he input word and a list at possibilities derived front the current hypothesis are passed as arguments. In some cases the spelling correcter produces several likely alternatives. The parser handles such alnhiguous words using the same mecllanisms which ucconlmotlate phrases with ambiguous interpretations ]'hut is. ulternative interpretations are curried altJng until Ihere is enough input to discriminate those which are pla.sible from those which are not. | lie d~.,tails ira: given in the n~:xt section.The user inuy also corrl:ct Ihe input taxi himself, These changes are hundle~l in ilnlch the S;llno way as those proposed by Ihe spellillg correcter. Of course, thes~ u'.~.r-suppliot ch;ingos ure given priority, and Ililrs=..'s built u~.allg Ihe formal ver'.;iun musI lxJ mlv.lili.~l or discarded.Spellimj correction is run as a separate, lower priority process because it reusonublo parse may be produced even without a proper interpretation for the unknown word. Since spelling correction can involve rather time-consuming searches, this work is best done when the parser has.no better alternatives to explore.In the first example there was only one I~ypothesis about the structure Of the input. More generally, there may be several hypotheses which I)rovide competing interprelutions uboul what has already been seen and whal will appear =text. Until these p~lrtial parses are Iound to be inconsistent with the actual input, they are carried along as part of the ~zctive purse. Therefore the active parse is a set at partial purse trees each efficiency required for real-time response, but could conceivably fail to find appropriate parses. We have not encountered such circumstances wilh tile s=nall domain-specitic semantic grammar we have been using.rl+e oaly Ilexibiltty described so lar *s that allowed by the optional elements el patterns, II om~ssions can be anttcipLIte(I, allowances trlay be built Ilil(= the grammar. In Ihi$ sechon we show how other OlnissiOI1S may h~ lUllittl(;~t ;tnlt Olhee Ilexitiililles achit=ved by ~j|low,ncj ;t(J('liliontil freHtlom in the wtw an item is allowed tO matcI1 a pattern. 
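To make the rule format concrete, here is a minimal sketch of word-indexed rewrite rules with optional and repeatable pattern elements, matched left to right. The lexicon, rule set and result names below are illustrative assumptions, not FlexP's actual grammar.

```python
# Illustrative sketch only, not FlexP's implementation: rewrite rules whose patterns
# contain optional ("?") and repeatable ("*") elements, applied bottom-up.

LEXICON = {
    "display": "DisplayVerb", "new": "MsgAdj", "old": "MsgAdj",
    "messages": "MsgHead", "message": "MsgHead",
}

# Each rule: (pattern, result); a pattern element is (flags, category).
RULES = [
    ((("?*", "MsgAdj"), ("", "MsgHead")), "MsgDescription"),
    ((("", "DisplayVerb"), ("?", "MsgDescription")), "DisplayCommand"),
]

def matches(pattern, categories):
    """True if the category sequence can instantiate the pattern."""
    def rec(p, c):
        if not p:
            return not c
        flags, cat = p[0]
        if "?" in flags and rec(p[1:], c):       # skip an optional element
            return True
        if c and c[0] == cat:
            rest = p if "*" in flags else p[1:]  # a repeatable element may match again
            return rec(rest, c[1:])
        return False
    return rec(list(pattern), list(categories))

words = "display new messages".split()
cats = [LEXICON[w] for w in words]
# Bottom-up: "new messages" rewrites to MsgDescription via the rule it indexes, and
# that result then completes the DisplayCommand rule indexed by "display".
assert matches(RULES[0][0], cats[1:])
assert matches(RULES[1][0], ["DisplayVerb", "MsgDescription"])
print("parsed as DisplayCommand")
```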
Ihere are two ways in with a top-level Ilypothesis about the overall structure at the input so far anti a curr~nt hyl)othesis concerning the next input.The actual mlplementation allows sharln(j of COnln)OII structure alnOllg competing hypotheses and so =S more ollic=ent than this descnption suggests. AS ~ tjeltur[tl str:.ltegy, we carry seVel :.11 linssitile inlerl)retallOltS only as kintj ;I.~ thert! is 11o clear lit;st ;.lllernalive. II1 l):.lrlictllar r'~o fh~xible parsing| t*.,chniqueS are us~t to suttl)ort parses Ior which th,.=re are pl-'tuszblo ;alternatives tmt|or normal imrsing.This heuristic helps achieve 11)0 wlllch the malching crilerla may be relaxed, namely• relax consistency constraints, e.g. number agreement• allow out Of order matches Consff;lency constraints are predicates which are attached to rules. They assert relationships which must hold among the items which till the pattern Fhese constraints allow contexl-sensilive constructions in the gramnmr. Such predicates are commonly used for simdar purposes by ATN parsers 1!41 and the flexibility achieved by relaxmg these constraints has been explored belore 113J. The tochmque fits smoothly into FlexP but has no1 ;icttJally been needed or used in our current application. First. previously skioPed elements are compared to the input. In this example, the element ?Pet is considered but does not match. Next, elements to the right of the eligible elements are considered. Thus MsgCase is considered even though the non-optional element MsgHead has not been matched. This succeeds and allows the partial parse to be extended to Unreeocjnizable substitutions are also handled by this mechanism. In the pll ra.se display the new stuff aboul ADA the word "stuff" iS not found in the dictionary so spelling correction is tned but does not produce any plausible alternatives. While spelling correction =s underway, the remaining spurs can be parsed by siml~y omlthng "stuff" and using the flexible matching proce<hJre. Tr;.lnspo31llOlIS :.ire handlEKI Ihrough one applic-'~llofl el Ilexible matching if Iho elemenl of the IransposL'<l pair is option~d, two applic;.tlions if not.h'lteri~.~:;tions are inore colnll~on in spoken than in wl ;ell language but do at:cur if= lyp(~t input sglnOltlnes. To deal wdh such ,1put, out design allows lot blocked patios tO be suspended rtllher than merely discarded.Users. especially novices. =nay embellish their inpul will1 words and phrases that do r',~t provide essential information and cannot be specifically anl,clpalet+ Consider t.vo examines: display please massages dated June 17 disl~ay Ior me messages dated June 17In the first case. the ml~.rjected word "please" could be recognized as a r:.mnmon noise phrase wI.ch means nothing to the Agent except possibly to suggust that the user is a nowce. The second example is more difficult. Both words of the interjected phrase can appear in a num0er of legitimate and me~lnu'lghJI constru+;h(.a.'~: they cannot be ignored so easily.For the latter example, parse suspension works us follows. After the first word, the active parse contains a single partial parse: The next word confirms the first of these, hut the fourth word "messages" does not. When the Darser finds that it cannot extend the active parse, it considers the suspended parse. Since "messages" fits, the active and suspended parses are exchanged anti the remainder of the input processed normally, so that the parser recognizes "display messages dated June 17" as if it had never contained "for me". 
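The suspend-and-resume behaviour in the "for me" example can be pictured with a small sketch; everything below (the toy grammar, the single suspended slot, the word-list states) is an illustrative assumption rather than FlexP's actual bookkeeping.

```python
# Illustrative sketch of parse suspension: when a word extends neither the active
# nor the suspended parse, the active parse is set aside and a fresh one is tried;
# when a later word fits the suspended parse, the two are exchanged.

MSG_WORDS = {"messages", "dated", "June", "17"}

def extends(state, word):
    """Toy grammar: 'display' starts a command; message-description words extend it."""
    if not state:
        return [word] if word == "display" else None
    return state + [word] if word in MSG_WORDS else None

def parse_with_suspension(words):
    active, suspended = [], None
    for word in words:
        extended = extends(active, word)
        if extended is not None:
            active = extended
            continue
        if suspended is not None:
            resumed = extends(suspended, word)
            if resumed is not None:          # the suspended parse continues: exchange
                active, suspended = resumed, active
                continue
        if suspended is None:                # block: set the active parse aside
            suspended = active
        active = extends([], word) or []     # try to start something new on this word
    return active

print(parse_with_suspension("display for me messages dated June 17".split()))
# ['display', 'messages', 'dated', 'June', '17'] -- "for me" is effectively skipped
```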
| When peDDle use language naturally, they make mistakes and employ economies of expression that. allen result in language which is ungrammalical by strict standards. In particular, such grammatical deviations will inp.vilabty occur in the inpul of a computer syslem which allows its user Io elnploy nalural langua¢.le. Such a computer system must, Ihert~.ior¢:, I}o p,t~l);Lrt~H to I)arsH its input nexibly, if il is avoid Irt=slration for its user.ht this paper, we have attemple'(J Io outline the main kinds of flexibility a nc'ttural I;.tnguage parsur intended for ~att=ral use sltouk| provide. We also describod a bottom-up pattern-matching parser, FloxP, which exhibits these Iloxibilities, and wllicl~ is suitable for restricted natural language input to a limited-domain system. | Main paper:
the importance of flexible parsing:
When people use natural language in natural conversation, they often do not respect grammatical niceties. Instead of speaking sequences of grammatically well-formed and complete sentences, people often miss out or repeat words or phrases, break off what they are saying and rephrase or replace it, speak in fragments, or use otherwise incorrect grammar. The following example conversation involves a number of these grammatical deviations:
A: I want .. can you send a memo a message to to Smith
B: Is that John or John Smith or Jim Smith
A: Jim
Instead of being unable or refusing to parse such ungrammaticality, human listeners are generally unperturbed by it. Neither participant in the above example, for instance, would have any difficulty in following the conversation. If computers are ever to converse naturally with humans, they must be able to parse their input as flexibly and robustly as humans do. While considerable advances have been made in recent years in applied natural language processing, few of the systems that have been constructed have paid sufficient attention to the kinds of deviation that will inevitably occur in their input if they are used in a natural environment. In many cases, if the user's input does not conform to the system's grammar, an indication of incomprehension followed by a request to rephrase may be the best he can expect. We believe that such inflexibility in parsing severely limits the practicality of natural language computer interfaces, and is a major reason why natural language has yet to find wide acceptance in such applications as database retrieval or interactive command languages. In this paper, we report on a flexible parser, called FlexP, suitable for use with a restricted natural language interface to a limited-domain computer system. We describe first the kinds of grammatical deviations we are trying to deal with, then the basic design decisions for FlexP with justification for them based on the kinds of problem to be solved, and finally more details of our parsing system with worked examples of its operation. These examples, and most of the others in the paper, represent natural language input to an electronic mail system that we and others [1] are constructing as part of our research on user interfaces. This system employs FlexP to parse its input.
types of grammatical deviation:
There are a number of distinct types of grammatical deviation and not ;ill lypt~; ;|r~ tl)tOll~l it1 ;Ill Iypes of COlnlnunicatJon siltiation. In tllin so;cites. we first define the restricted type el communication situation that we will be concerned will1, thai of a limile~-I-domain computer system and its user communicating via a keyboard and (hsplay screen. We then present a taxonomy of grammatical deviations common in this context, and by implication a set el parsing flexibilities needed to dealwith them.In the remainder of this paper, we will focus out a restricted type of canto)unitarian situation, that between a limited-domain system and its user, and on the p:trsing flexibilities neede(f by suuh a system Le ColJe with the user's inevitable grammatical deviations. Examples of the type of system we have in mind are data-b;~e retr0eval systems, electroa)ic mail systems, medical diaunosis systems, or any systems operating in a domain so rE'stricted thai they can COmpkHely understand ;311y relevant input a user might provide, In short, exactly the kind O! system that is normally used for work in applied natural Imtguage processing. There are several points to be made.First. although ,~uch systems can be expected to parse and understand anythi,lg relevant la their domain, their users cannot be expected to confine tllemselves to relevant input. As Bohrow el, al. 121 .ale. users oflcn explain Iltl~ir underlying motivations or olhorwzse jt=nlify their l(~(Itli'.%l,'~ ill l(~llnB ~Itlih~ ilr(!l~v;ilil Ill lh(!' (i()lnain ()fth(: ~yst~in. ]'hit ro,~tlJ| is lhal slJch systems cannot expecl Io parse ;.dl llx~il inlnH .,:vun wdh lhe use of flexible parsirx.j lechniqq..Secondly. a flexible parser is just purl of the conversational comporient of such ;,I system, ai'id cannot solve all parsi,g problems by itself, For example, il a parser can extract two coherent fragments train an otherwise incomprellensible input, the decisions about what Ihe system should next must be made by another component of the system. A decision on wllether to jump to a conclusion about wllat the user intended, to present him with a set of alternative interpretations, or to profess total confusion, can only be made with information about the Itistory of the conversation, beliefs about the user's goals, and measures of plausibility for any given action by the user. See [7~ for more discusSion o| Ihis broader view of graceful interaction in man-machine communication. Suffice it to say that we assume a flexible parser is iust one component of a larger system, and Ihal any incomprehensions or ambiguities that it finds are passed on to another component of the system with access to higtler-level information, putting it in a better Position to decide what to do next.Finally, we assume that, as usual for such systems, input is typed, rather than spoken as is normal in human conversations. This simplifies low.level processing tremendously because key-strokes unlike speech wave-farms are unambiguous.On the other hand, problems like misSpelling arise, and a flexible parser cannot assume thut segmentation into words by spaces :Slid carriage returns will always be corr~:t. However, such input is stilt one side of a conversation, rather than a polished text in the manner of most written material. As such, it is likely to contain many of the same type of errors normally found in spoken conversations.Misspelling is perhaps the most common form of grammatical deviation in written language. Accordingly. 
it is the form of ungrammaticality that has been dealt wdh the most by language processing systems. PARRY J t I J. I.II'E[1 Jl~ I. ;taxi tlumernus olher systems have tried te correct misspell i.p0Jt from their users.llhis n(,:' £a,mch w;l~ Sll~ll~.i~tl by IIH~. A. ll,ce OliVe uI SCI~IlliIic nl!s('lllc:h till(Jilt" An ability to correct spelling implies the existence of a dictionary of correctly spelled words An input word =tot fot.ld m the dictionary is assumed to be misspell and is compared against each of the dictionary words. If a dichonary word comes close enough to the input word according to some criteria of lexical matching, it is used in place of the input word.Spelhng correction nloy be attempted in or out ol COntext. For instance.there is only one regson~.lble correction for "relavegt" or Ior "seperate". l)td Ior all mlitlI like *'till" SOltle k.'~d at conlext is typlc;.dly ilecossory as m 'TII see yet= tm April" or "he w;.tS shot will} ltle stolen till." In ellect, c(}lltexl c;in Lx.. t !.lse(I to rc(ttlCO tile size Oi Ihe diClll)ltaly tO i}e searched for correct words. )'his lJt}lh n}akl,=s Ihe seuich inure t:|ficlent al}d red}ices tile possibilily el nlullll)le Ill.:ll(.;hus OI Ihe input ;.tgalllSt life LliCtiOI}afy. The LIFEF1 {UI sysletn uses tile strong cun:;tralnIs typically llrovlde~ by its SCII};.n}IIC gl;nnlnal if} IhlS way to r(.'~Iuc(3 tile range el possibilities Ior spelling correction.A particukvly troublesome kind of spelling error results in a valid word different from the one intended, as in "show me on of the messages". C|Parly. ~lich on error colt only t~e corre(;It~l Ihrotlgh cI)nlp;Irison against -'. contextually determined vocabulary.Even accomplished users Of a language will sometimes encounter words they do not know. Suci} situations are a test of their language learning skills. If one (lidn'l know tile word "fawn". one could nt least decide it was a cotour from "a fawn COlOUred sweater". There is. however, a very common special subclass of novel words that is well within the capabilities of present day systems: unknown proper names. Given an appropriate context, either sentential or discourse, it is relatively straightforward to parse unknown words into tile names of people, places, etc. Thus in "send copies to Moledes.ki Chiselov" it is reasonable to conclude Iron} the local context that "Moledeski" is a first name. "Chiselov" =s a suman~e, and together they identily a person (the intended roe:pit'.hi of the copm~5). Strnt~gles like this were used in the POLITICS [St. FRUMP 16J. and PARRY 11 I I systems.Since novel words are by definition not in the known voc=bulary, how can a parsing system distiogt,sh them from misspellings? In most cases. the novel words will not be close enough lo known words to allow SUCCeSSful correction, aS in the above oxamole, bul this is not illways true; an unknown first name of "AI" COUld easily be corrected to "all". Conversely, it is not s~te to assume that unkl}own words ill contexts which allow proper names are re;.}lly proper names as in: "send copies to al managers". In this example. "or" probably should be corrected to "all". In order to resolve such cas~. it may be necessary to clleck ;}gainst a list of referents lor proper nameR, if this is known, or otherwis(~ to consider such factors aR whelher tile inlli;ll letters of Iho words are capilalized.AS lar as we know. no systems yet constr,ctc<t have int~jroted their handling of mi.~spclt wortl.q iln(t unknown, proper nanl~"s Io Ihe degree oullined ;.Ifl¢)v~.,. 
However, It}t~ COOP 19l .~,y,,it{~ln allows sysllHllnlic access In a dat;.i llaSt. • (:Ulllailllll~j |)lOller ii;nnes wllhotll Ihe ni'~L~t Ii)l ilICitlSlOll of Ihe words ,1 Ihe system's ilnrsing vocabulary.Wntten text is segmented into words by spaces and new lines, and into higher level units by commas, periods and olher punctuation marks. Both classes, especially the second, may be omitted or inserted speciously. Spoken laf~gtJago s a so segmented, but by the Clt,te different markers of stress, interaction and noise words and phrases: we will not cons=der those further here. IncorreCt segmentation ;ll the lexical level results in two or more words being run togetl)er, as in "runtogether". or a single word being split up into two or more segments, ns in "tog ether" or (inconveniently) "to get her". or combinations of these effects as in "runlo geth el". In all cases, it seems natural to deal with such errors by extending the spelling correction mechanism to be able to recognize target words as initial se(jments of unknown words, and vice-versa. AS far as we know. no current systems deal with incorrect segmentation into words.The other type of segmenting error, incorrect punctuation, has a much broader impact on parsing methodology. Current parsers typ;catty work one sentence at a time. and assume that each sentence is terminated by an explicit end of sentence marker. A flexible parser must be able to deal with Ihe potenliai absence of such a marker, and recognize the sentence boundary regardless.It sllould also be able to make use of such punctuation if il is used correctly, and to ignore it if it is used incorrectly.Instead of punCtuation, many interactive systems use carriage-return to il~'Jicale sentence termination. Missing sentence terminators in this case correspond to two sentences on one line. or to the typing of a sentence without the terminating return, while specious terminators correspond tO typing a sentence on more than one line.In spoken language, it is very common to break off and restart all or part of an utterance: I want to --Could you lell me the name? Was tile man --er--tile ofliciol here yesterday?Usually. such restarts are sKjnall~l in some way. by "urn" or "er". or more explicitly by "lers back tip" or some si,,Ior phrase.In written language, such restarts do not normnlly occur because they are erase(l by lhe writer bolore the reatler sees Ihenl.interactive COmputer sysle--n~ typically prpvide facilitios for Iheir users tO delete the last cllorocler, word. or ctlrletlI hno as Ihotlgh ii had never been typed, for the very purpose of allowing such restalts. Given these signals, tl~e lustarIs aru ~Jasy Io (letecl anti inlerpr(;I. However. sonle|inlL'bs tIS(~rs I:lll to make use ol Ihese s=gnals. Sometimes. for instance, i~lptlt not containing a carriage-return can be spread over several lines by intermixing of input and output.A flexible parser should be able to make sense out. of "obvious" restarts that are not signalled, as in: delete the show me aU the messages from SmithNaturally occurmg language often involves utterances that are not complete sentences.Often the appropriateness of such fragmentary utterances depends oil conversational or physical context as in:A: Do you mean Jim Smith or Fred Smith? B: Jim A: Send a message to Smith B: OK A: with copies to Jones A flexible parser must be able to parse such fragments given the appropriate context.There is a question here of what such fragments should be parsed into. 
Parsing systems which have dealt with the problem have typically assumed tl it such inputs are ellipses of complete sentences, and that their parsing involves finding that complete sentence, and pursing it. Thus the sentence corresponding to "Jim" in the example above would be "I moon Jim". Essenhally this view has been taken by the LIFER [81 and GUS [2l systems. An alternative view =s that such fragments are not ellipses of more complete sentences, but are themselves complete utterances given tile context in which they occur, and sholdd be parsc<l as such. We have taken this view in our approach to flexihto parsing, as we will explain more fully below.Carbonoll (personal communication) suggests a third view appropriale for some fragments: that of an extended case frame, hi tile second examt.lle above, for instance. A's 'with copies fo Jones" forms a natural pint ul the c=ts~.' Irame est~.lblish~t fly "Self(| a message to .~;mith" Yet :molh~.,r approach to Ir~lgmnnt l)ar:;iflq is taken in the PLANES system ~ 12[ which always parses in terms el major fragments rather than Complete utterances. This technique relies on there I~ing only one way to combine Ihe fragments thus obtained, whicll may he a reasonable aSs|lnlptJon tar ill;.iny limited clara;rot systenls.Ellipses call ulna occur without regard Io context.A type Ihal inleract=ve .';yshtms are paHK:uhtrly likely 1o I:.lce is cryl)licness in which ;irhcles :tnd fdh(~r nOll-e~.~.%enlJ;iJ words are entitled ;is ill ":;how nleSS;.IgOS alter June 17" inste.;p.I ol the m¢lre complete ".,;how me all mesnacles dat(.~l after June 17" Again, tiler(: is a question of whether to consider Ihe cryptic tnl)LII cunlpluh~, which would me~fn inodJlying file system's urzmmmr, or whether to consider il ellil}tical, and cnmplele it by using Ilexlble techniques te parse if against the comply.re versioll as it exisls in Ihe standard gr;Inlnlar. Since conjunctions can support such a wide range of ellipsis, it is generally impractical to recognize such utterances by appropriate grammar exlensions. Efforts to deal with conhlnctJon have Iherefore depended on general mecllanisms which supplement the basic parsing strategy, as in fhe LUNAR system [fSl, or wilich modify the grammar temporarily, as ill the work el Kwasny and Sondheimer I IOI. We have not attempted 1o deal wilh tills type of ellipsis in our parsing system, and will not discuss further the type at flexibility it requires. It is retahvely straightforward for a system of limited comprehension to screen out and igfloro standard noise phrases such as "1 think" or "as lar as I can tell".More troublesome are interjections that cannel be recogni,~ed by the system, as might for instance be the case in where the unrecognized intefiections are bracketed. A flexible parser should be able to ignore such interjections. There is always tile chance that the unrecognizc~t part was an important part of what tile user was Iryillg In say, bl.fl clearly, the problems that arise from tills c;.tnllot be handlml by a parser.Omissions of words (or phrases) from the input are closely related to cryptic input aS discussed above, and one way of dealing with cryptic IflpLll in to treat il as a set of omi.~,~ions. However, Jn Cryptic input only iness~.*fdi~d ifdormaliOll is missed oul. 
while it is conceivable that one could also omit essential information, as in: "Display the message June 17". Here it is unclear whether the speaker means a message dated on June 17, or before June 17, or after June 17 (we assume that the system addressed can display things immediately, or not at all). If an omission can be narrowed down in this way, the parser should be able to generate all the alternatives (for contextual resolution of the ambiguity, or as the basis of a question to the user). If the omission can be narrowed down to one alternative, then the input was merely cryptic.

Besides omitting words and phrases, people sometimes substitute incorrect or unintended ones. Often such substitutions are spelling errors and should be caught by the spelling correction mechanism, but sometimes they are inadvertent substitutions or uses of equivalent vocabulary not known to the system. This type of substitution is just like an omission, except that there is an unrecognized word or phrase in the place where the omitted input should have been. For instance, in "the message over June 17", "over" takes the place of "dated" or "sent after" or whatever else is appropriate at that point. If the substitution is of vocabulary which is appropriate but unknown to the system, parsing of substituted words can provide the basis of vocabulary extension.

It is not uncommon for people to fail to make the appropriate agreement between the various parts of a noun or verb phrase, as in: "I wants to send a messages to Jim Smith." The appropriate action is to ignore the lack of agreement, and Weischedel and Black [13] describe a method for relaxing the predicates in an ATN which typically check for such agreements. However, it is generally not possible to conclude locally which value of the marker (number or person) for which the clash occurs is actually intended. We considered examples in which the disagreement involves more than inflections (as in "the message over June 17") in the section on substitutions.

Idioms are phrases whose interpretation is not what would be obtained by parsing and interpreting them constructively in the normal way. They may also not adhere to the standard syntactic rules. Idioms must thus be parsed as a whole, in a pattern-matching kind of mode. Parsers based purely on pattern matching, like that of PARRY [11], are thus able to parse idioms naturally, while others must either add a preprocessing phase of pattern matching, as in the LUNAR system [15], or mix specific patterns in with more general rules, as in the work of Kwasny and Sondheimer [10]. Semantic grammars [3, 8] provide a relatively natural way of mixing idiomatic and more general patterns.

In normal human conversation, once something is said, it is said and cannot be changed, except indirectly by more words which refer back to the original ones. In interactively typed text, there is always the possibility that a user may notice an error he has made and go back and correct it himself, without waiting for the system to pursue its own, possibly slow and ineffective, methods of correction. With appropriate editing facilities, the user may do this without erasing intervening words, and, if the system is processing his input on a word-by-word basis, may thus alter a word that the system has already processed. A flexible parser must be able to take advantage of such user-provided corrections to unknown words, and to prefer them over its own corrections. It must also be prepared to change its parse if the user changes a valid word to another, different but equally valid, word.
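One of the deviations catalogued above, agreement failure, has a particularly mechanical remedy: run the usual agreement predicate, but record the violation instead of rejecting the parse. The sketch below is a hedged reading of that idea (the feature dictionaries and example values are invented), not the actual Weischedel and Black implementation.

```python
# Relaxed agreement predicate: flag clashes, never block the parse.

def agree(subject_features, verb_features, deviations):
    clashes = [f for f in ("person", "number")
               if subject_features.get(f) and verb_features.get(f)
               and subject_features[f] != verb_features[f]]
    if clashes:
        deviations.append("agreement relaxed on: " + ", ".join(clashes))
    return True                 # relaxed predicate: the arc is always taken

notes = []
# "I wants ...": first-person singular subject with a third-person verb form
agree({"person": 1, "number": "sg"}, {"person": 3, "number": "sg"}, notes)
print(notes)                    # ['agreement relaxed on: person']
```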
an approach to flexible parsing:
Most current parsing systems are unable to cope with most of the kinds of grammatical deviation outlined above. This is because typical parsing systems attempt to apply their grammar to their input in a rigid way, and since deviant input, by definition, does not conform to the grammar, they are unable to produce any kind of parse for it at all. Attempts to parse more flexibly have typically involved parsing strategies to be used after a top-down parse using an ATN [14] or similar transition net has failed.

We have constructed a parser, FlexP, which can apply its grammar to its input flexibly, and thus deal with the grammatical deviations discussed in the previous section. We should emphasize, however, that FlexP is designed to be used in the interface to a restricted-domain system. As such, it is intended to work from a domain-specific semantic grammar, rather than one suitable for broader classes of input. FlexP thus does not embody a solution for flexible parsing of natural language in general. In describing FlexP, we will note those of its techniques that seem unlikely to scale up to use with more complex grammars with wider coverage.

We have adopted in FlexP an approach to flexible parsing based not on ATNs, but closer to the pattern-matching parser of the PARRY system [11], possibly the most robust parser yet constructed. Our approach is based on several design decisions:

• bottom-up rather than top-down parsing: this aids in the parsing of fragmentary utterances, and in the recognition of interjections and restarts.

• pattern matching: this is essential for idioms, and also aids in the detection of omissions and substitutions in non-idiomatic phrases.

• parse suspension and continuation: the ability to suspend a parse and later resume its processing is important for interjections, restarts, and non-explicit terminations.

In the remainder of this section we examine and justify these design decisions in more detail.

Our choice of a bottom-up strategy is based on our need to recognize isolated sentence fragments. If an utterance which would normally be considered only a fragment of a complete sentence is to be recognized top-down, there are two approaches to take. First, the grammar can be altered so that the fragment is recognized as a complete utterance in its own right. This is undesirable because it can cause enormous expansion of the grammar, and because it becomes difficult to decide whether a fragment appears in isolation or as part of a larger utterance, especially if the possibility of missing end-of-sentence markers also exists. The second option is for the parser to infer from the conversational context what grammatical sub-category (or sequence of sub-categories) the fragment might fit into, and then to do a top-down parse from that sub-category. This essentially is the tactic used in the GUS [2] and LIFER [8] systems. This strategy is clearly better than the first one, but has two problems: first, of predicting all possible sub-categories which might come next, and secondly, of inefficiency if a large number are predicted. Kwasny and Sondheimer [10] use a
combination of the two strategies, by temporarily modifying an ATN grammar to accept fragment categories as complete utterances at the times they are contextually predicted.

Bottom-up parsing avoids the problem of predicting what sub-categories may occur. If a fragment filling a given sub-category does occur, it is parsed as such whatever the context. However, if a given input can be parsed as more than one sub-category, the bottom-up approach would have to produce them all, even if only one would be predicted top-down. In a system of limited comprehension, fragmentary recognition is sometimes necessary because not all of an input can be recognized, rather than because of intentional ellipsis. Here it is probably impossible to make predictions, and bottom-up parsing is the only method that is likely to work. As described below, bottom-up strategies, coupled with suspended parses, are also helpful in recognizing interjections and restarts.

We have chosen to use a grammar of linear patterns rather than a transition network because pattern matching meshes well with bottom-up parsing, because it facilitates recognition of utterances with omissions and substitutions, and because it is necessary anyway for the recognition of idiomatic phrases. The grammar of the parser is a set of rewrite or production rules whose left-hand side is a linear pattern of constituents (lexical or higher-level) and whose right-hand side defines a result constituent. Elements of the pattern may be labelled optional or may allow for repeated matches. We make the assumption, certainly true for the grammar we are presently working with, that the grammar will be semantic rather than syntactic, with patterns corresponding to idiomatic phrases or to object and event descriptions meaningful in some limited domain, rather than to general syntactic structures.

Linear patterns fit well with bottom-up parsing because they can be indexed by any of their components, and because, once indexed, it is straightforward to confirm whether a pattern matches input already processed in a way consistent with the way the pattern was indexed. Patterns help with the detection of omissions and substitutions because in either case the relevant pattern can still be indexed by the remaining elements that appear correctly in the input, and thus the pattern as a whole can be recognized even if some of its elements are missing or incorrect. In the case of substitutions, such a technique can actually help focus the spelling correction, proper name recognition, or vocabulary learning techniques, whichever is appropriate, by isolating the substituted input and the pattern constituent which it should have matched. In effect, this allows the normally bottom-up parsing strategy to go top-down to resolve such substitutions.

In normal left-to-right processing, it is not necessary to activate all the patterns indexed by every new word as it is considered. If a new word is accounted for by a pattern that has already been partially matched by previous input, it is likely that no other patterns need to be indexed and matched for that input. This heuristic allows FlexP's parsing algorithm to limit the number of patterns it tries to match. We should emphasize, however, that it is a heuristic, and while it has caused us no trouble with the limited-domain grammar we have been using, it is unclear how well it would transfer to a more complex grammar.
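A rule of the kind just described — a linear pattern of optional and repeatable elements producing a result constituent, indexable by any of its components — might look roughly like this. The Rule/Element classes and the MsgDescription pattern are illustrative reconstructions based on the examples given later in the text, not FlexP's actual data structures.

```python
# Sketch of a linear-pattern rewrite rule and its bottom-up index.
from dataclasses import dataclass

@dataclass
class Element:
    category: str
    optional: bool = False
    repeatable: bool = False

@dataclass
class Rule:
    pattern: list          # list of Element, matched left to right
    result: str            # category of the constituent this rule builds

MSG_DESCRIPTION = Rule(
    pattern=[Element("Det", optional=True),
             Element("MsgAdj", optional=True, repeatable=True),
             Element("MsgHead"),
             Element("MsgCase", optional=True, repeatable=True)],
    result="MsgDescription")

# Bottom-up indexing: a rule is reachable from any category in its pattern,
# so a fragment like "new messages" can invoke it via MsgAdj or MsgHead alone.
RULE_INDEX = {}
for rule in [MSG_DESCRIPTION]:
    for el in rule.pattern:
        RULE_INDEX.setdefault(el.category, []).append(rule)

print([r.result for r in RULE_INDEX["MsgAdj"]])   # ['MsgDescription']
```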
FlexP's algorithm does, however, carry along multiple partial parses in other ambiguous cases, removing the need for any backtracking.

FlexP employs the technique of suspending a parse, with the possibility of later continuation, to help with the recognition of interjections, restarts, and implicit terminations. The parsing algorithm works left to right in a breadth-first manner. It maintains a set of partial parses, each of which accounts for the input already processed but not yet accounted for by a completed parse. The parser attempts to incorporate each new input into each of the partial parses. If this is successful, the partial parses are extended and may increase or decrease in number. If no partial parse can be extended, the entire set is saved as a suspended parse. There are several possible explanations for input mismatch, i.e. the failure of the next input to extend a parse:

• The input could be an implicit termination, i.e. the start of a new top-level utterance, and the previous utterance should be assumed complete.

• The input could be a restart, in which case the active parse should be abandoned and a new parse started from that point.

• The input could be the start of an interjection, in which case the active parse should be temporarily suspended, and a new parse started for the interjection.

It is not possible, in general, to distinguish between these cases at the time the mismatch occurs. If the active parse is not at a possible termination point, then input mismatch cannot indicate implicit termination, but may indicate either restart or interjection. It is necessary to suspend the active parse and wait to see if it is continued at the next input mismatch. On the other hand, if the active parse is at a possible termination point, input mismatch does not rule out interjection or even restart. In this situation, our algorithm tentatively assumes that there has been an implicit termination, but suspends the active parse anyway for subsequent potential continuation. Note also that the possibility of implicit termination provides justification for the strategy of interpreting each input immediately it is received: if the input signals an implicit termination, then the user may well expect the system to respond immediately to the input thus terminated.

This section describes how FlexP achieves the flexibilities discussed earlier. The implementation described is being used as the parser for an intelligent interface to a multi-media message system [1]. The intelligence in this interface is concentrated in a User Agent which mediates between the user and the underlying tool system. The Agent ensures that the interaction goes smoothly by, among other things, checking that the user specifies the operations he wants performed, and their parameters, correctly and unambiguously, and conducting a dialogue with the user if problems arise. The role of FlexP as the Agent's parser is to transform the user's input into the internal representations employed by the Agent. Usually this input is a request for action by the tool or a description of objects known to the tool. Our examples are drawn from that context.

Interpretation begins as soon as any input is available. The first word is used as an index into the store of rewrite rules. Each rule gives a pattern and a structure to be produced when the pattern is matched.
The components of the structure are built from the structures or words which match the elements of the pattern. The word "display" indexes the rule shown in the accompanying figure [not reproduced in this copy]. We call the partially-instantiated pattern which labels the upper node a hypothesis. It represents a possible interpretation for a segment of input.

The next word, "new", does not directly match the hypothesis, but since "new" is a MsgAdj (an adjective which can modify a description of a message), it indexes a further rule [again given in a figure]. Here "?" marks an optional element, and "*" a repeatable one. For the sake of clarity, we have omitted other prefixes which distinguish between terminal and non-terminal pattern elements. The result of this rule fits the current hypothesis, and so extends the parse.

The third input matches the category MsgHead (the head noun of a message description) and so fits the current hypothesis. This match fills the last non-optional slot in that pattern. By doing so it makes the current hypothesis and its parent pattern potentially complete. When the parser finds a potentially complete phrase whose result is of interest to the Agent (and the parent phrase in this example is in that category), the result is constructed and sent. However, since the parser has not seen a termination signal, this parse is kept active; the input seen so far may be only a prefix of some longer utterance such as "display new messages about ADA". In this case "about ADA" would be recognized as a match for MsgCase (a prepositional phrase that can be part of a message description), the parse would be extended, and a revision of the previous structure sent to the Agent.
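The incremental behaviour just described — sending the Agent a result as soon as the message description is potentially complete, then a revision when "about ADA" arrives — can be imitated with a few lines of toy code. The category assignments and dictionaries below are invented; only the send-then-revise control flow follows the text (the command word itself would be handled by an enclosing rule not shown here).

```python
# Toy incremental interpretation of a message description.
LEXICON = {"new": "MsgAdj", "messages": "MsgHead"}
CASE_MARKERS = {"about", "from", "to", "dated"}

def interpret(words):
    desc = {"adjectives": [], "head": None, "cases": []}
    results = []                          # each entry is one (revised) result sent to the Agent
    i = 0
    while i < len(words):
        w = words[i]
        if LEXICON.get(w) == "MsgAdj":
            desc["adjectives"].append(w)
        elif LEXICON.get(w) == "MsgHead":
            desc["head"] = w
        elif w in CASE_MARKERS and i + 1 < len(words):
            desc["cases"].append((w, words[i + 1]))
            i += 1
        if desc["head"]:                  # hypothesis is potentially complete
            results.append({k: list(v) if isinstance(v, list) else v
                            for k, v in desc.items()})
        i += 1
    return results

for r in interpret("display new messages about ADA".split()):
    print(r)
# {'adjectives': ['new'], 'head': 'messages', 'cases': []}
# {'adjectives': ['new'], 'head': 'messages', 'cases': [('about', 'ADA')]}
```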
unrecognized words:
When an input word cannot be found in the dictionary, spelling correction is attempted in a background process which runs at lower priority than the parser. The input word and a list of possibilities derived from the current hypothesis are passed as arguments. In some cases the spelling corrector produces several likely alternatives. The parser handles such ambiguous words using the same mechanisms which accommodate phrases with ambiguous interpretations; that is, alternative interpretations are carried along until there is enough input to discriminate those which are plausible from those which are not. The details are given in the next section. The user may also correct the input text himself. These changes are handled in much the same way as those proposed by the spelling corrector. Of course, these user-supplied changes are given priority, and parses built using the former version must be modified or discarded. Spelling correction is run as a separate, lower-priority process because a reasonable parse may often be produced even without a proper interpretation for the unknown word. Since spelling correction can involve rather time-consuming searches, this work is best done when the parser has no better alternatives to explore.

In the first example there was only one hypothesis about the structure of the input. More generally, there may be several hypotheses which provide competing interpretations of what has already been seen and what will appear next. Until these partial parses are found to be inconsistent with the actual input, they are carried along as part of the active parse. Therefore the active parse is a set of partial parse trees, each with a top-level hypothesis about the overall structure of the input so far and a current hypothesis concerning the next input. The actual implementation allows sharing of common structure among competing hypotheses and so is more efficient than this description suggests. As a general strategy, we carry several possible interpretations only as long as there is no clear best alternative. In particular, no flexible parsing techniques are used to support parses for which there are plausible alternatives under normal parsing. This heuristic helps achieve the efficiency required for real-time response, but could conceivably fail to find appropriate parses. We have not encountered such circumstances with the small domain-specific semantic grammar we have been using.

The only flexibility described so far is that allowed by the optional elements of patterns. If omissions can be anticipated, allowances may be built into the grammar. In this section we show how other omissions may be handled, and other flexibilities achieved, by allowing additional freedom in the way an item is allowed to match a pattern. There are two ways in which the matching criteria may be relaxed, namely:

• relax consistency constraints, e.g. number agreement

• allow out-of-order matches

Consistency constraints are predicates which are attached to rules. They assert relationships which must hold among the items which fill the pattern. These constraints allow context-sensitive constructions in the grammar. Such predicates are commonly used for similar purposes by ATN parsers [14], and the flexibility achieved by relaxing these constraints has been explored before [13]. The technique fits smoothly into FlexP but has not actually been needed or used in our current application. First,
previously skipped elements are compared to the input. In this example, the element ?Det is considered but does not match. Next, elements to the right of the eligible elements are considered. Thus MsgCase is considered even though the non-optional element MsgHead has not been matched. This succeeds and allows the partial parse to be extended.

Unrecognizable substitutions are also handled by this mechanism. In the phrase "display the new stuff about ADA", the word "stuff" is not found in the dictionary, so spelling correction is tried but does not produce any plausible alternatives. While spelling correction is underway, the remaining input can be parsed by simply omitting "stuff" and using the flexible matching procedure. Transpositions are handled through one application of flexible matching if an element of the transposed pair is optional, and two applications if not.

Interjections are more common in spoken than in written language, but do occur in typed input sometimes. To deal with such input, our design allows blocked parses to be suspended rather than merely discarded. Users, especially novices, may embellish their input with words and phrases that do not provide essential information and cannot be specifically anticipated. Consider two examples: "display please messages dated June 17" and "display for me messages dated June 17". In the first case, the interjected word "please" could be recognized as a common noise phrase which means nothing to the Agent, except possibly to suggest that the user is a novice. The second example is more difficult: both words of the interjected phrase can appear in a number of legitimate and meaningful constructions, so they cannot be ignored so easily. For the latter example, parse suspension works as follows. After the first word, the active parse contains a single partial parse; "for" does not extend it, so that parse is suspended and new parses are started. The next word confirms the first of these, but the fourth word, "messages", does not. When the parser finds that it cannot extend the active parse, it considers the suspended parse. Since "messages" fits, the active and suspended parses are exchanged and the remainder of the input is processed normally, so that the parser recognizes "display messages dated June 17" as if it had never contained "for me".
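The suspension behaviour walked through above can be reduced to a small runnable toy. The word-list "patterns" below are a drastic simplification of FlexP's grammar, and the names are invented; the point is only the control flow: on a mismatch, suspend the active parse, start new ones, and resume a suspended parse when a later word fits it again.

```python
# Toy demonstration of parse suspension on "display for me messages dated June 17".
PATTERNS = {
    "command":    ["display", "messages", "dated", "june", "17"],
    "for-phrase": ["for", "me"],
}

def matches(pattern, matched, word):
    return len(matched) < len(pattern) and pattern[len(matched)] == word

def parse(words):
    active, suspended = [], []        # each parse: (pattern_name, words_matched_so_far)
    for w in words:
        extended = [(name, got + [w]) for name, got in active
                    if matches(PATTERNS[name], got, w)]
        if extended:                  # the word fits an active parse
            active = extended
            continue
        # Input mismatch: suspend what we had, then (a) start new patterns
        # from this word, or (b) resume a previously suspended parse with it.
        suspended = active + suspended
        active = [(name, [w]) for name, pat in PATTERNS.items() if pat[0] == w]
        if not active:
            active = [(name, got + [w]) for name, got in suspended
                      if matches(PATTERNS[name], got, w)]
    return active

print(parse("display for me messages dated june 17".split()))
# [('command', ['display', 'messages', 'dated', 'june', '17'])]
```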
conclusion:
When people use language naturally, they make mistakes and employ economies of expression that often result in language which is ungrammatical by strict standards. In particular, such grammatical deviations will inevitably occur in the input of a computer system which allows its user to employ natural language. Such a computer system must, therefore, be prepared to parse its input flexibly if it is to avoid frustration for its user. In this paper, we have attempted to outline the main kinds of flexibility a natural language parser intended for natural use should provide. We also described a bottom-up pattern-matching parser, FlexP, which exhibits these flexibilities, and which is suitable for restricted natural language input to a limited-domain system.
Appendix:
| null | null | null | null | {
"paperhash": [
"kwasny|relaxation_techniques_for_parsing_grammatically_ill-formed_input_in_natural_language_understanding_systems",
"ball|representation_of_task-specific_knowledge_in_a_gracefully_interacting_user_interface",
"weischedel|responding_intelligently_to_unparsable_inputs",
"chester|a_parsing_algorithm_that_extends_phrases",
"hayes|graceful_interaction_in_man-machine_communication",
"carbonell|towards_a_self-extending_parser",
"kwasny|ungrammaticality_and_extra-grammaticality_in_natural_language_understanding_systems",
"waltz|an_english_language_question_answering_system_for_a_large_relational_database",
"erman|hearsay-ii._tutorial_introduction_and_retrospective_view",
"herdrix|human_engineering_fcr_applied_natural_language_processing",
"scha|semantic_grammar:_an_engineering_technique_for_constructing_natural_language_understanding_systems",
"carbonell|subjective_understanding,_computer_models_of_belief_systems",
"hendrix|human_engineering_for_applied_natural_language_processing",
"aho|the_theory_of_parsing,_translation,_and_compiling"
],
"title": [
"Relaxation Techniques for Parsing Grammatically Ill-Formed Input in Natural Language Understanding Systems",
"Representation of Task-Specific Knowledge in a Gracefully Interacting User Interface",
"Responding Intelligently to Unparsable Inputs",
"A Parsing Algorithm that Extends Phrases",
"Graceful Interaction in Man-Machine Communication",
"Towards a Self-Extending Parser",
"Ungrammaticality and Extra-Grammaticality in Natural Language Understanding Systems",
"An English language question answering system for a large relational database",
"Hearsay-II. Tutorial Introduction and Retrospective View",
"Human engineering fcr applied natural language processing",
"Semantic grammar: an engineering technique for constructing natural language understanding systems",
"Subjective understanding, computer models of belief systems",
"Human Engineering for Applied Natural Language Processing",
"The Theory of Parsing, Translation, and Compiling"
],
"abstract": [
"This paper investigates several language phenomena either considered deviant by linguistic standards or insufficiently addressed by existing approaches. These include co-occurrence violations, some forms of ellipsis and extraneous forms, and conjunction. Relaxation techniques for their treatment in Natural Language Understanding Systems are discussed. These techniques, developed within the Augmented Transition Network (ATN) model, are shown to be adequate to handle many of these cases.",
"Command interfaces to current interactive systems often appear inflexible and unfriendly to casual and expert users alike. We are constructing an interface that will behave more cooperatively (by correcting spelling and grammatical errors, asking the user to resolve ambiguities in subparts of commands, etc.). Given that present-day interfaces often absorb a major portion of implementation effort, such a gracefully interacting interface can only be practical if it is independent of the specific tool or functional subsystem with which it is used. \n \nOur interface is tool-independent in the sense that all its information about a particular tool is expressed in a declarative tool description. This tool description contains schemas for each operation that the tool can perform, and for each kind of object known to the system. The operation schemas describe the relevant parameters, their types and defaults, and the object schemas give corresponding structural descriptions in terms of defining and derived subcomponents. The schemas also include input syntax, display formats, and explanatory text. We discuss how these schemas can be used by the tool-independent interface to provide a graceful interface to the tool they describe.",
"All natural language systems are likely to receive inputs for which they are unprepared. The system must be able to respond to such inputs by explicitly indicating the reasons the input could not be understood, so that the user will have precise information for trying to rephrase the input. If natural language communication to data bases, to expert consultant systems, or to any other practical system is to be accepted by other than computer personnel, this is an absolute necessity.This paper presents several ideas for dealing with parts of this broad problem. One is the use of presupposition to detect user assumptions. The second is relaxation of tests while parsing. The third is a general technique for responding intelligently when no parse can be found. All of these ideas have been implemented and tested in one of two natural language systems. Some of the ideas are heuristics that might be employed by humans; others are engineering solutions for the problem of practical natural language systems.",
"It is desirable for a parser to be able to extend a phrase even after it has been combined into a larger syntactic unit. This paper presents an algorithm that does this in two ways, one dealing with \"right extension\" and the other with \"left recursion\". A brief comparison with other parsing algorithms shows it to be related to the left-corner parsing algorithm, but it is more flexible in the order that it permits phrases to be combined. It has many of the properties of the sentence analyzers of Marcus and Riesbeck, but is independent of the language theories on which those programs are based.",
"Compared to humans, current natural language dialogue systems often behave in a rigid and fragile manner when their conversations deviate from a narrowly conceived mainstream, e.g. when faced with ungrammatical, unclear, or unrecognizable input, ambiguous descriptions, or requests for clarification of their own output. We believe that the time is now ripe to construct systems which can interact gracefully with their users when such contingencies arise. Graceful interaction is not a single skill, but a combination of several diverse abilities. We list these components, and describe one of them - the ability to communicate robustly. Detailed descriptions of all the components appear in [4], along with details of a system architecture for their integrated Implementation.",
"This paper discusses an approach to incremental learning in natural language processing. The technique of projecting and integrating semantic constraints to learn word definitions is analyzed as implemented in the POLITICS system. Extensions and improvements of this technique are developed. The problem of generalizing existing word meanings and understanding metaphorical uses of words is addressed in terms of semantic constraint integration.",
"Among the components included in Natural Language Understanding (NLU) systems is a grammar which spec i f i es much o f the l i n g u i s t i c s t ruc tu re o f the ut terances tha t can be expected. However, i t is ce r ta in tha t inputs that are ill-formed with respect to the grammar will be received, both because people regularly form ungra=cmatical utterances and because there are a variety of forms that cannot be readily included in current grammatical models and are hence \"extra-grammatical\". These might be rejected, but as Wilks stresses, \"...understanding requires, at the very least, ... some attempt to interpret, rather than merely reject, what seem to be ill-formed utterances.\" [WIL76]",
"By typing requests in English, casual users will be able to obtain explicit answers from a large relational database of aircraft flight and maintenance data using a system called PLANES. The design and implementation of this system is described and illustrated with detailed examples of the operation of system components and examples of overall system operation. The language processing portion of the system uses a number of augmented transition networks, each of which matches phrases with a specific meaning, along with context registers (history keepers) and concept case frames; these are used for judging meaningfulness of questions, generating dialogue for clarifying partially understood questions, and resolving ellipsis and pronoun reference problems. Other system components construct a formal query for the relational database, and optimize the order of searching relations. Methods are discussed for handling vague or complex questions and for providing browsing ability. Also included are discussions of important issues in programming natural language systems for limited domains, and the relationship of this system to others.",
"Abstract : The Hearsay-2 system, developed at CMU as part of the five-year ARPA speech-understanding project, was successfully demonstrated at the end of that project in September 1976. This report reprints two Hearsay II papers which describe and discuss that version of the system: The 'Hearsay-2 System: A Tutorial', and 'A Retrospective View of the Hearsay-2 Architecture'. The first paper presents a short introduction to the general Hearsay-2 structure and describes the September 1976 configuration of knowledge-sources; it includes a detailed description of an utterance being recognized. The second paper discusses the general Hearsay-2 architecture and some of the crucial problems encountered in applying that architecture to the problem of speech understanding.",
"Human engineering features for enhancing the usability of practical natural language systems are described. Such features include spelling correction, processing of incomplete (elliptical) input?, of the underlying language definition through English queries, and their ability for casual users to extend the language accepted by the system through the use of synonyms and peraphrases. All of the features described are incorporated in LJFER, -\"applications-oriented system for creating natural language interfaces between computer programs and casual USERS LJFER's methods for the mroe complex human engineering features presented.",
"One of the major stumbling blocks to more effective used computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language under-standing systems. The primary purpose of this research is not to advance our theoretical under-standing of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.",
"Abstract : Modeling human understanding of natural language requires a model of the processes underlying human thought. No two people think exactly alike; different people subscribe to different beliefs and are motivated by different goals in their activities. A theory of subjective understanding has been proposed to account for subjectively-motivated human thinking ranging from ideological belief to human discourse and personality traits. A process-model embodying this theory has been implemented in a computer system, POLITICS. POLITICS models human ideological reasoning in understanding the natural language text of international political events. POLITICS can model either liberal or conservative ideologies. Each ideology produces a different interpretation of the input event. POLITICS demonstrates its understanding by answering questions in natural language question-answer dialogs.",
"Human engineering features for enhancing the usabil ity of practical natural language systems a l re described. Such features include spelling correction, processing of incomplete (ell ipt ic-~I) input?, jntfrrog-t ior of th p underlying language definition through English oueries, and ?r rbil.it y for casual users to extrnd the language accepted by the system through the-use of synonyms ana peraphrases. All of 1 h* features described are incorporated in LJFER,-\"n r ppl ieat ions-orj e nlf d system for 1 creating natural language j nterfaees between computer programs and casual USERS LJFER's methods for r<\"v] izir? the mroe complex human enginering features ? re presented. 1 INTRODUCTION This pape r depcribes aspect r of a n applieations-oriented system for creating natural langruage interfaces between computer software and Casual users. Like the underlying researen itself, the paper is focused on the human engineering involved in designing practical rnd comfortable interfaces. This focus has lead to the investigation of some generally neglected facets of language processing, including the processing of Ireomplfte inputs, the ability to resume parsing after recovering from spelling errors and the ability for naive users to input English stat.emert s at run time that, extend and person-lize the language accepted by the system. The implementation of these features in a convenient package and their integration with other human engineering features are discussed. There has been mounting evidence that the current state of the art in natural language processing, although still relatively primitive, is sufficient for dealing with some very real problems. For example, Brown and Burton (1975) have developed a usable system for computer assisted instruction, and a number of language systems have been developed for interfacing to data bases, including the REL system developed by Thompson and Thompson (1975), the LUNAR system of Woods et al. (1972), and the PLANES system ol Walt7 (1975). The SIGART newsletter for February, 1977, contains a collection cf 5? short overviews of research efforts in the general area of natural language interfaces. Tnere has rise been a growing demand for application systems. At SRi's Artificial Irtellugene Center alone, many programs are ripe for the addition of language capabilities, Including systems for data base accessing, industrial automation, automatic programming, deduct ior, and judgmental reasoning. The appeal cf these systems to builders ana users .-'like is greatly enhanced when they are able to accept natural language inputs. B. The LIFER SYSTEM To add …",
"From volume 1 Preface (See Front Matter for full Preface) \n \nThis book is intended for a one or two semester course in compiling theory at the senior or graduate level. It is a theoretically oriented treatment of a practical subject. Our motivation for making it so is threefold. \n \n(1) In an area as rapidly changing as Computer Science, sound pedagogy demands that courses emphasize ideas, rather than implementation details. It is our hope that the algorithms and concepts presented in this book will survive the next generation of computers and programming languages, and that at least some of them will be applicable to fields other than compiler writing. \n \n(2) Compiler writing has progressed to the point where many portions of a compiler can be isolated and subjected to design optimization. It is important that appropriate mathematical tools be available to the person attempting this optimization. \n \n(3) Some of the most useful and most efficient compiler algorithms, e.g. LR(k) parsing, require a good deal of mathematical background for full understanding. We expect, therefore, that a good theoretical background will become essential for the compiler designer. \n \nWhile we have not omitted difficult theorems that are relevant to compiling, we have tried to make the book as readable as possible. Numerous examples are given, each based on a small grammar, rather than on the large grammars encountered in practice. It is hoped that these examples are sufficient to illustrate the basic ideas, even in cases where the theoretical developments are difficult to follow in isolation. \n \nFrom volume 2 Preface (See Front Matter for full Preface) \n \nCompiler design is one of the first major areas of systems programming for which a strong theoretical foundation is becoming available. Volume I of The Theory of Parsing, Translation, and Compiling developed the relevant parts of mathematics and language theory for this foundation and developed the principal methods of fast syntactic analysis. Volume II is a continuation of Volume I, but except for Chapters 7 and 8 it is oriented towards the nonsyntactic aspects of compiler design. \n \nThe treatment of the material in Volume II is much the same as in Volume I, although proofs have become a little more sketchy. We have tried to make the discussion as readable as possible by providing numerous examples, each illustrating one or two concepts. \n \nSince the text emphasizes concepts rather than language or machine details, a programming laboratory should accompany a course based on this book, so that a student can develop some facility in applying the concepts discussed to practical problems. The programming exercises appearing at the ends of sections can be used as recommended projects in such a laboratory. Part of the laboratory course should discuss the code to be generated for such programming language constructs as recursion, parameter passing, subroutine linkages, array references, loops, and so forth."
],
"authors": [
{
"name": [
"S. Kwasny",
"N. Sondheimer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Eugene Ball",
"P. Hayes"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Weischedel",
"J. Black"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Chester"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Hayes",
"R. Reddy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Carbonell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Kwasny",
"N. Sondheimer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Waltz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"L. Erman",
"V. Lesser"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Gary G. Herdrix"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. J. H. Scha"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Carbonell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Hendrix"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Aho",
"J. Ullman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"181820",
"14224604",
"18828496",
"355666",
"3237422",
"16742497",
"12695499",
"18227465",
"59900183",
"59814145",
"263227606",
"142895805",
"5436772",
"60775129"
],
"intents": [
[
"methodology"
],
[],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[],
[
"background"
],
[
"background",
"methodology"
],
[],
[
"methodology"
],
[],
[
"background"
]
],
"isInfluential": [
true,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false
]
} | Problem: When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking off and restarting, speaking in fragments, etc.
Solution: In this paper, we propose that for a computer system to accept natural language input from users routinely, it must display a similar indifference to grammatical deviations. We outline a set of parsing flexibilities that such a system should provide, and describe FlexP, a bottom-up pattern-matching parser designed to accommodate restricted natural language input to a limited-domain computer system. | 536 | 0.166045 | null | null | null | null | null | null | null | null |
bdb956df909bf3e7a2c3e48fa472416e4ef34563 | 41770670 | null | If The Parser Fails | The unforgiving nature of natural language components when someone uses an unexpected input has recently been a concern of several projects. For instance, Carbonell (1979) discusses inferring the meaning of new words. Hendrix et al. (1978) describe a system that provides a means for naive users to define personalized paraphrases and that lists the items expected next at a point where the parser blocks. Weischedel et al. (1978) show how to relax both syntactic and semantic constraints such that some classes of ungrammatical or semantically inappropriate input are understood. Kwasny and Sondheimer (1979) present techniques for understanding several classes of syntactically ill-formed input. Codd et al. (1978) and Lebowitz (1979) present alternatives to top-down, left-to-right parsers as a means of dealing with some of these problems. | {
"name": [
"Weischedel, Ralph M. and",
"Black, John E."
],
"affiliation": [
null,
null
]
} | null | null | 18th Annual Meeting of the Association for Computational Linguistics | 1980-06-01 | 8 | 9 | null | null | null | null | This paper presents heuristics for responding to inputs that cannot be parsed even using the techniques referenced in the last paragraph for relaxing syntactic and semantic constraints. The paper concentrates on the results of an experiment testing our heuristics.We assume only that the parser is written in the ATN formalism. In this method, the parser writer must assign a sequence of condition-action pairs for each state of the ATN. If no parse can be found, the condition-action pairs of the last state of the path that progressed furthest through the input string are used to generate a message about the nature of the problem, the interpretation being followed, and what was expected next. The conditions may refer to any ATN register, the input string, or any computation upon them (even semantic ones). The actions can include any computation (even restarting the parse after altering the unparsed portion) and can generate any responses to the user. These heuristics were tested on a grammar which uses only syntactic information. We constructed test data such that one sentence would block at each of the 39 states of the ATN where blockage could occur. In only 3 of the 39 cases did the parser continue beyond the point that was the true source of the parse failing.From the tests, it was clear that the heuristics frequently pinpointed the exact cause of the block. However, the response did not always convey that precision to the user due to the technical nature of the grammatical cause of the blockage. Even though the heuristics correctly selected one state in the overwhelming majority of cases, frequently there were several possible causes for blocking at a given state.Another aspect of our analysis was the computational and developmental costs for adding these heuristics to a parser. Clearly, only a small fraction of the parsing time and memory usage is needed to record the longest partial parse and generate messages for the last state on it. Significant effort is required of the grammar writer to devise the condition-action pairs. However, such analysis of the grammar certainly adds to the programmer's understanding of the grammar, and the condition-action pairs provide significant documentation "This work was supported by the University of Delaware Research Foundation, Inc.• "This work was performed while John Black was with the Dept. of Computer & Infor~nation Sciences, University of Delaware. of the grammar. Only one page of program code and nine pages of constant character strings for use in messages were added.From the experiment we conclude the following: I. The heuristics are powerful for small natural language front ends to an application domain.2. The heuristics should also be quite effective in a compiler, where parsing is far more deterministic.3. The heuristics will be more effective in a semantic grammar or in a parser which frequently interacts with a semantic component to guide it.We will be adding condition-action pairs to the states of the RUS parser (Bobrow, 1978) and will add relaxation techniques for both syntactic and semantic constraints as described in Weischedel, et.al. (1978) and Kwasny and Sondheimer (1979) . The purpose is to test the effectiveness of paraphrasing partial semantic interpretations as a means of explaining the interpretation being followed. 
Furthermore, Bobrow (1978) indicates that semantic guidance makes the RUS parser significantly more deterministic; we wish to test the effect of this on the ability of our heuristics to pinpoint the nature of a block. | null | Main paper:
:
This paper presents heuristics for responding to inputs that cannot be parsed even using the techniques referenced in the last paragraph for relaxing syntactic and semantic constraints. The paper concentrates on the results of an experiment testing our heuristics. We assume only that the parser is written in the ATN formalism. In this method, the parser writer must assign a sequence of condition-action pairs for each state of the ATN. If no parse can be found, the condition-action pairs of the last state of the path that progressed furthest through the input string are used to generate a message about the nature of the problem, the interpretation being followed, and what was expected next. The conditions may refer to any ATN register, the input string, or any computation upon them (even semantic ones). The actions can include any computation (even restarting the parse after altering the unparsed portion) and can generate any responses to the user. These heuristics were tested on a grammar which uses only syntactic information. We constructed test data such that one sentence would block at each of the 39 states of the ATN where blockage could occur. In only 3 of the 39 cases did the parser continue beyond the point that was the true source of the parse failing.

From the tests, it was clear that the heuristics frequently pinpointed the exact cause of the block. However, the response did not always convey that precision to the user, due to the technical nature of the grammatical cause of the blockage. Even though the heuristics correctly selected one state in the overwhelming majority of cases, frequently there were several possible causes for blocking at a given state.

Another aspect of our analysis was the computational and developmental costs of adding these heuristics to a parser. Clearly, only a small fraction of the parsing time and memory usage is needed to record the longest partial parse and generate messages for the last state on it. Significant effort is required of the grammar writer to devise the condition-action pairs. However, such analysis of the grammar certainly adds to the programmer's understanding of the grammar, and the condition-action pairs provide significant documentation of the grammar. Only one page of program code and nine pages of constant character strings for use in messages were added.

From the experiment we conclude the following: 1. The heuristics are powerful for small natural language front ends to an application domain. 2. The heuristics should also be quite effective in a compiler, where parsing is far more deterministic. 3. The heuristics will be more effective in a semantic grammar or in a parser which frequently interacts with a semantic component to guide it.

We will be adding condition-action pairs to the states of the RUS parser (Bobrow, 1978) and will add relaxation techniques for both syntactic and semantic constraints as described in Weischedel et al. (1978) and Kwasny and Sondheimer (1979). The purpose is to test the effectiveness of paraphrasing partial semantic interpretations as a means of explaining the interpretation being followed. (Footnotes: This work was supported by the University of Delaware Research Foundation, Inc. This work was performed while John Black was with the Dept. of Computer & Information Sciences, University of Delaware.)
Furthermore, Bobrow (1978) indicates that semantic guidance makes the RUS parser significantly more deterministic; we wish to test the effect of this on the ability of our heuristics to pinpoint the nature of a block.
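A hedged sketch of the condition-action idea described above: each ATN state carries an ordered list of (condition, action) pairs, and when no parse is found, the pairs of the last state on the longest partial parse are used to generate an explanation. The state names, registers, and messages below are invented for illustration; they are not taken from the authors' grammar.

```python
# Toy condition-action pairs attached to ATN states for diagnosing blocked parses.

def np_missing_head(registers, remaining_input):
    return registers.get("det") is not None and not remaining_input

STATE_MESSAGES = {
    "NP/DET": [
        (np_missing_head,
         lambda reg, rest: "The phrase beginning '%s' seems to lack a noun." % reg["det"]),
        (lambda reg, rest: True,          # catch-all condition, tried last
         lambda reg, rest: "I expected the noun phrase to continue here."),
    ],
}

def explain_block(longest_path, registers, remaining_input):
    state = longest_path[-1]              # last state reached on the best partial parse
    for condition, action in STATE_MESSAGES.get(state, []):
        if condition(registers, remaining_input):
            return action(registers, remaining_input)
    return "I could not parse the input."

print(explain_block(["S/", "NP/DET"], {"det": "the"}, []))
# The phrase beginning 'the' seems to lack a noun.
```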
Appendix:
| null | null | null | null | {
"paperhash": [
"kwasny|ungrammaticality_and_extra-grammaticality_in_natural_language_understanding_systems",
"carbonell|towards_a_self-extending_parser",
"lebowitz|reading_with_a_purpose",
"hendrix|developing_a_natural_language_interface_to_complex_data"
],
"title": [
"Ungrammaticality and Extra-Grammaticality in Natural Language Understanding Systems",
"Towards a Self-Extending Parser",
"Reading With a Purpose",
"Developing a natural language interface to complex data"
],
"abstract": [
"Among the components included in Natural Language Understanding (NLU) systems is a grammar which spec i f i es much o f the l i n g u i s t i c s t ruc tu re o f the ut terances tha t can be expected. However, i t is ce r ta in tha t inputs that are ill-formed with respect to the grammar will be received, both because people regularly form ungra=cmatical utterances and because there are a variety of forms that cannot be readily included in current grammatical models and are hence \"extra-grammatical\". These might be rejected, but as Wilks stresses, \"...understanding requires, at the very least, ... some attempt to interpret, rather than merely reject, what seem to be ill-formed utterances.\" [WIL76]",
"This paper discusses an approach to incremental learning in natural language processing. The technique of projecting and integrating semantic constraints to learn word definitions is analyzed as implemented in the POLITICS system. Extensions and improvements of this technique are developed. The problem of generalizing existing word meanings and understanding metaphorical uses of words is addressed in terms of semantic constraint integration.",
"A newspaper story about terrorism, war, politics or football is not likely to be read in the same way as a gothic novel, college catalog or physics textbook. Similarly, tne process used to understand a casual conversation is unlikely to be the same as the process of understanding a biology lecture or TV situation comedy. One of the primary differences amongst these various types of comprehension is that the reader or listener will nave different goals in each case. The reasons a person nan for reading, or the goals he has when engaging in conversation wlll nave a strong affect on what he pays attention to, how deeply the input is processed, and what information is incorporated into memory. The computer model of understanding described nere addresses the problem of using a reader's purpose to assist in natural language understanding. This program, the Integrated Partial Parser (IPP) ~s designed to model the way people read newspaper stories in a robust, comprehensive, manner. IPP nan a set of interests, much as a human reader does. At the moment it concentrates on stories about International violence and terrorism.",
"Aspects of an intelligent interface that provides natural language access to a large body of data distributed over a computer network are described. The overall system architecture is presented, showing how a user is buffered from the actual database management systems (DBMSs) by three layers of insulating components. These layers operate in series to convert natural language queries into calls to DBMSs at remote sites. Attention is then focused on the first of the insulating components, the natural language system. A pragmatic approach to language access that has proved useful for building interfaces to databases is described and illustrated by examples. Special language features that increase system usability, such as spelling correction, processing of incomplete inputs, and run-time system personalization, are also discussed. The language system is contrasted with other work in applied natural language processing, and the system's limitations are analyzed."
],
"authors": [
{
"name": [
"S. Kwasny",
"N. Sondheimer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Carbonell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Michael Lebowitz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Hendrix",
"E. Sacerdoti",
"Daniel Sagalowicz",
"Jonathan Slocum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null
],
"s2_corpus_id": [
"12695499",
"16742497",
"6881568",
"15391397"
],
"intents": [
[
"background"
],
[
"background"
],
[
"background"
],
[]
],
"isInfluential": [
false,
false,
false,
false
]
} | - Problem: The paper addresses the challenge of responding to unexpected inputs in natural language processing, specifically focusing on cases where syntactic and semantic constraints are relaxed to understand ungrammatical or semantically inappropriate input.
- Solution: The paper proposes heuristics for handling unparsable inputs by utilizing condition-action pairs in the ATN formalism, which generate messages to pinpoint the cause of parsing failure. The effectiveness of these heuristics is tested through experiments, demonstrating their power in small natural language front ends and potential applications in compilers and semantic grammars. | 536 | 0.016791 | null | null | null | null | null | null | null | null |
645d36f6b3142f346d4b31a5bbee3bc42d3b7759 | 6749590 | null | Word and Object in Disease Descriptions | Our second experiment investigated the co-occurrence properties of some medical terms. Aware that many medical diagnostic programs have assumed attribute independence, we sought to shed light on the appropriateness of the assumption by evaluating it in terms of word cooccurrence in disease definitions. [Table 1 (words at the top of the disease-occurrence frequency list, with counts such as 2865 'in', 2485 'possibly', 2315 'with') appears here; the remainder of the table is unrecoverable OCR residue and is omitted.] | {
"name": [
"Blois, M. S. and",
"Sherertz, D. D. and",
"Tuttle, M. S."
],
"affiliation": [
null,
null,
null
]
} | null | null | 18th Annual Meeting of the Association for Computational Linguistics | 1980-06-01 | 1 | 3 | null | The vocabulary employed consists of about 19,000 distinct "words" (determined by a lexical definition), roughly divided equally between common English words and medical terms.We measured word frequency by "disease occurrence", (the number of disease definitions in which a given word occurs one or more times).By this measure, only seven words occurred in more than half the disease definitions, and about 40% of the vocabulary occurred in only a single disease definition.( Table i lists the words at the top of the frequency list together with the number of occurrences.)Assisted by the facilities of the TMuNIX operating system, we created a series of inverted files (from a magnetic tape of the CMIT text), and developed a set of interactive programs to form a word-and-context query system.This system has enabled us to study the problem of inferring term reference in this large sample of text (some 333,000 word occurrences), within the context of diseases.An interesting early result was the ease with which many medical terms could be algorithmically separated from co~on English words.After adjusting for the fact that some disease categories are larger than others, we defined an entropy-like measure of the distribution of word occurrences over the eleven physiological categories as a measure of category specificity.We reasoned that some medical terms such as 'murmur', while not specific to any particular heart disease, are specific to heart disease generally.This term would not, for example, be used in describing endocrine disorders. Such a word would be expected to occur in category 04 (cardiovascular disease) frequently, and not in the other categories.Such a term would, by our measure, have a low 'entropy'.A com~non English word like 'of', would be used in the descriptions of all kinds of disease, and would accordingly have a high 'entropy'. Tables 2 and 3 show the top and bottom of the list of all words occurring in two or more diseases sorted by this entropy measure.In these lists, as our hypothesis seems to imply, low 'entropy' corresponds to high 'specificity', and high 'entropy' to low 'specificity'. This separation of medical terms from common English words, by algorithmic means, is facilitated by the context supplied by the notion of 'disease category', and the fact that this was represented in the CMIT text.Since the previously described procedure had given us a means of selecting medical terms from common English words, it was possible to produce lists of 'pure' medical terms.We then wrote a program which formed all pairs of such terms (ignoring order).We defined an 'association measure' (A) which measured the difference between the observed co-occurrences of term-pairs (they could co-occur in any location in the definition and in either order), and the co-occurrences expected from chance alone. Tables 4 and 5 show the top and bottom of a list of all pairs formed from the low entropy terms in the previous experiment.The first 1120 terms were chosen, that is, those having an entropy of 2.0 napiers cr less.The pair list was then sorted by this association measure, A.Word pairs which are found to be highly associated, appear to do so for two reasons.The test, which is trivial, is that some word pairs are semantically one word despite their being lexically, two. 
Comon examples would be 'white House' and 'Hong Kong'; medical examples are 'vital capacity', 'axis deviation', and 'slit lamp'.These could have been avoided algorithmically by not taking adjacent words in forming the termpairs, without any significant overall effect. The second reasons for high frequency word co-occurrence is that both words are causally related through underlying physiological mechanisms.It is these which had the greatest interest for us, and the measure A, may be viewed as a measure of the non-independence of the symptoms or signs themselves.The term pairs which are negatively associated, have this property for the same reason.If the two terms are used typically in the descriptions of different diseases, they are less likely to co-occur than by chance.(In a baseball story on the sports page, we would not find 'pass', 'punt', or 'tackle').These negatively associated pairs may have value in diagnostic programs for the recognition of two or more diseases in a given patient, a problem not satisfactorily dealt with by even the most sophisticated of current programs.Finally, an extension of the entropy concept permits one to generate (algorithmically) the vocabularies used by the medical specialties (which correspond to the disease categories represented in CMIT. This is done by assigning terms which occur predominantly in one category to a single vocabulary and then sorting by entropy. Tables 6 and 7 show the vocabularies used in dermatology and gastroenterology (as derived from CMIT).These vocabularies, it will be noted, can be used as 'hit lists' for the purpose of recognizing the content of medical texts.In su~nary, we see the ability to differentiate medical terms from common words by context, and the ability to relate the medical words by meaning, as two of the first steps toward text processing algorithms that preserve and can manipulate the semantic content of words in medical texts. .LO ,16 .~)9 .09 ,o~ ,t~9 .O9 .ng ,09 .lu to ZOU. .03 (IIU) Bo,i-vlniriculat .lZ (381) .UJ (9() bone-v4~inil .12 (SH|) .05 (15t*) bone-(c;.L2 (36|) .HZ 646one-ceivtx | null | null | null | null | Main paper:
:
The vocabulary employed consists of about 19,000 distinct "words" (determined by a lexical definition), roughly divided equally between common English words and medical terms. We measured word frequency by "disease occurrence" (the number of disease definitions in which a given word occurs one or more times). By this measure, only seven words occurred in more than half the disease definitions, and about 40% of the vocabulary occurred in only a single disease definition. (Table 1 lists the words at the top of the frequency list together with the number of occurrences.) Assisted by the facilities of the UNIX operating system, we created a series of inverted files (from a magnetic tape of the CMIT text), and developed a set of interactive programs to form a word-and-context query system. This system has enabled us to study the problem of inferring term reference in this large sample of text (some 333,000 word occurrences), within the context of diseases. An interesting early result was the ease with which many medical terms could be algorithmically separated from common English words. After adjusting for the fact that some disease categories are larger than others, we defined an entropy-like measure of the distribution of word occurrences over the eleven physiological categories as a measure of category specificity. We reasoned that some medical terms such as 'murmur', while not specific to any particular heart disease, are specific to heart disease generally. This term would not, for example, be used in describing endocrine disorders. Such a word would be expected to occur in category 04 (cardiovascular disease) frequently, and not in the other categories. Such a term would, by our measure, have a low 'entropy'. A common English word like 'of' would be used in the descriptions of all kinds of disease, and would accordingly have a high 'entropy'. Tables 2 and 3 show the top and bottom of the list of all words occurring in two or more diseases sorted by this entropy measure. In these lists, as our hypothesis seems to imply, low 'entropy' corresponds to high 'specificity', and high 'entropy' to low 'specificity'. This separation of medical terms from common English words, by algorithmic means, is facilitated by the context supplied by the notion of 'disease category', and the fact that this was represented in the CMIT text. Since the previously described procedure had given us a means of selecting medical terms from common English words, it was possible to produce lists of 'pure' medical terms. We then wrote a program which formed all pairs of such terms (ignoring order). We defined an 'association measure' (A) which measured the difference between the observed co-occurrences of term-pairs (they could co-occur in any location in the definition and in either order) and the co-occurrences expected from chance alone. Tables 4 and 5 show the top and bottom of a list of all pairs formed from the low-entropy terms in the previous experiment. The first 1120 terms were chosen, that is, those having an entropy of 2.0 napiers or less. The pair list was then sorted by this association measure, A. Word pairs which are found to be highly associated appear to do so for two reasons. The first, which is trivial, is that some word pairs are semantically one word despite their being, lexically, two.
Common examples would be 'White House' and 'Hong Kong'; medical examples are 'vital capacity', 'axis deviation', and 'slit lamp'. These could have been avoided algorithmically by not taking adjacent words in forming the term-pairs, without any significant overall effect. The second reason for high-frequency word co-occurrence is that both words are causally related through underlying physiological mechanisms. It is these which had the greatest interest for us, and the measure A may be viewed as a measure of the non-independence of the symptoms or signs themselves. The term pairs which are negatively associated have this property for the same reason. If the two terms are used typically in the descriptions of different diseases, they are less likely to co-occur than by chance. (In a baseball story on the sports page, we would not find 'pass', 'punt', or 'tackle'.) These negatively associated pairs may have value in diagnostic programs for the recognition of two or more diseases in a given patient, a problem not satisfactorily dealt with by even the most sophisticated of current programs. Finally, an extension of the entropy concept permits one to generate (algorithmically) the vocabularies used by the medical specialties (which correspond to the disease categories represented in CMIT). This is done by assigning terms which occur predominantly in one category to a single vocabulary and then sorting by entropy. Tables 6 and 7 show the vocabularies used in dermatology and gastroenterology (as derived from CMIT). These vocabularies, it will be noted, can be used as 'hit lists' for the purpose of recognizing the content of medical texts. In summary, we see the ability to differentiate medical terms from common words by context, and the ability to relate the medical words by meaning, as two of the first steps toward text processing algorithms that preserve and can manipulate the semantic content of words in medical texts. [The table listings that follow here (association scores for individual term pairs) are unrecoverable OCR residue and are omitted.]
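To make the two measures described above concrete, here is a rough sketch (not part of the original paper; the exact size adjustment, log base, and data layout are assumptions) of how the category-specificity 'entropy' and the pairwise association score A might be computed:

```python
import math

def category_entropy(word_counts_by_category, category_sizes):
    """Entropy-like specificity of a word's distribution over disease categories.

    word_counts_by_category: {category: number of diseases in that category
                              whose definitions contain the word}
    category_sizes: {category: total number of diseases in the category},
                    used to adjust for unequal category sizes (an assumption).
    Low entropy = the word is concentrated in few categories (more 'medical');
    high entropy = the word is spread over all categories (more 'common English').
    """
    rates = {c: n / category_sizes[c] for c, n in word_counts_by_category.items() if n > 0}
    total = sum(rates.values())
    probs = [r / total for r in rates.values()]
    return -sum(p * math.log(p) for p in probs)  # natural log, i.e. napiers

def association(term_pair_counts, term_occurrence, n_diseases):
    """Association measure A: observed minus expected co-occurrence of term pairs.

    term_pair_counts: {(t1, t2): number of disease definitions containing both terms}
    term_occurrence:  {t: number of disease definitions containing t}
    """
    scores = {}
    for (t1, t2), observed in term_pair_counts.items():
        expected = term_occurrence[t1] * term_occurrence[t2] / n_diseases
        scores[(t1, t2)] = observed - expected  # > 0: associated, < 0: mutually avoiding
    return scores
```

Sorting the output of `association` in descending order would reproduce the kind of ranked pair list the paper reports in its tables, with the negatively associated pairs at the bottom.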
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 536 | 0.005597 | null | null | null | null | null | null | null | null |
02bf83a75fc28d591392b7d0af831acb44393387 | 208978352 | null | Parsing | Looking at the Proceedings of last year's Annual Meeting, one sees that the session most closely paralleling this one was entitled Language Structure and Parsing. In a very nice presentation, Martin Kay was able to unite the papers of that session under a single theme. As he stated it. | {
"name": [
"Martin, W. A."
],
"affiliation": [
null
]
} | null | null | 18th Annual Meeting of the Association for Computational Linguistics | 1980-06-01 | null | null | null | "Illcre has been a shift of emphasis away from highly ~tmctured systems of complex rules as the principal repository of infi~mmtion about the syntax of a language towards a view in which the responsibility is distributed among the Icxicoo. semantic parts of the linguistic description, aod a cognitive or strategic component. Concomitantly, interest has shiRed from al!lorithms for syntactic analysis and generation, in which the central stnlctorc and the exact seqtlencc of events are paramount, to systems iu which a heavier burden is carried by the data stl ucture and in wilich the order of,:vents is a m,.~ter of strategy.['his ycar. the papers of the session represent a greater diversity of rescan:h directions. The paper by Hayes. and thc paper by Wilcnsky and Aren~ arc both examples of what Kay had in mind. but the paper I)y Church, with rcgard to the question of algorithms, is quite the opposite. He {tolds that once the full range uf constraints dcscribing pc~plc's processing behavior has been captul'ed, the best parsing strategies will be rather straightforwarcL and easily cxplaincd as algorithms.Perilaps the seven papers in this year's session can best be introduecd by briefly citing ~mc of the achievcmcqts and problems reported in the works they refcrence,In thc late i960"s Woods tweeds70] capped an cfTort by several people to dcvch)p NI'N parsing. 'lllis well known technique applies a smdghtforward top down, left CO right` dcpth fic~t pat.~ing algorithm to a syntactic grammar. I-:~pccialiy in the compiled fi)rm produced by Ilorton [Bnrton76~, ] . the parser was able to produce the first parse in good time. but without ~mantic constraints, numcroos syn~ictic analyses could be and ~,mctimcs were fou.nd, especially in scntenccs with conjunctions. A strength of the system was the ATN grammar, which can be dc~ribcd as a sct of context frec production rules whose right hand sides arc finite statc machincs and who.~ U'ansition arcs have bccn augmented with functions able to read and set registers, and also able to block a transition on their an:. Many people have found this a convenient fonnulism in which m develop grammars of Engtish. The Woods ATN parser was a great success and attempts were made to exploit it (a) as a modc[ of human processing and (b) as a tool for writing grammars. At the same time it was recognized to havc limimdoos. It wasn't tolerant of errors, and it couldn't handle unknown words or constructions (there were n~'tny syntactic constmcdons which it didn't know). In addidon, the question answering system fed by the parser had a weak notion of word and phrase .~emantics and it was not always able to handle quantificrs properly. It is not ctcar thcs¢ components could have supported a stronger interaction with syntactic parsing, had Woods chosen to a~cmpt it.On the success side. Kaplan [Kaplan72] was inspired to claim that the ATN parser provided a good model tbr some aspects of human processing. Some aspects which might bc modeled are: by ordering the arcs leaving the state where the head noun of'an NP has been ~'ccpccd: a Ix)p am (tcrminuting the NP) is tried before an an: accepting a modifying relative clause. ]-h)wcver, Ricil [Rich75] puims out that dfis an: ordering solution would seem to have diltlculdcs with 2). 
This sentence is often nut peracived2) They told the girl that Bill liked that he would be at the loath;all game.as requiring backup, yet if the arcs an: ordered as for I), it does require backup. There is no doubt that whatever is going on. the awareness of backup in 3) is so much stronger than in 2) that it seems like a different phenomcnoo. To resolve this,3)The horse raced past the b,'u'n fell.one could claim that perceived backup is some fimction of' the length of the actual b~kup, or maybe of the degree of commiunent to the original path (althoogh it isnt clear what this would mean in ATN terms).In this session. Ferrari and Stock will turn the are ordering game around and describe, for actual tex~ the probability that a given arc is the correct exit an: from a node. given the an: by wiuch the parser arrived at the node. [t will be intcr~ting to look at their distributions. [n the speech project at IBM War, sou Laboratories [Baker75] it was discovered some time ago that, for a given text, the syntactic class era word could be predicted correctly over 90% of the umo given only the syntactic class of the preceding word` Interestingly, the correctness of' predictions fell off less than 10% whcn only the current word w~ used. One wonders if this same level of skewncss holds across texts, or (what we will hear) for the continuation of phrases. These results should be helpful in discussing the whole issue of arc orderiog" Implicit in any al~ ordering strategy is the assumption that not all parses of a sentence will be fi)und. Having the "best" path, the parscr will stop wben it gets an acceptable analysis. Arc ordering helps find that "best' path. Marcus [Man:us7g] , agreed with the idea of following only a best path, but he claimed that the reason there is no pe~eived backup in 2) is that the human parser is able to look ahead a few constituents iostead of just one s~ate and one eoilstitucnt in making a u'ansition. He claims this makes a more accurate model of human garden path behavior, but it doesn't address the issue of unlimited stuck depth. Here, Church will describe a parser similar in design co Marcus', except that it conserves memory. This allows Church to address psychological facLS not addrc~qed by either Marcus or the ATN models. Church claims that exploiting stack size constraints will incn:ase the cimnces of building a good best path parser.Besides psychological modeling, thcre is also an interest m using thc ATN ft)nnalism for writing and teaching grammars. Paramount here is e:;planation, both of the grammar and its appiicatinn to a particu!ar sentence. The papcr by Kchler and Woods reports on this. Weischcdcl picks a particular problem, responding to an input which the ATN can't handle. He a~,'xiatcs a list of diagnostic couditions and actions with each state. When no pur.xc is found, the parser finds tile last store on the path which progressed the thnhcst d)rongh the input string and executes its diagnostic conditions and actions. When a parser uses ,rely syutactic constraints" one cxpects it to find a lut of parses. UsuuJly the number of parses grows marc than tincarly with sentence length. 
Thus, for a ~tirly COmlflete grammar and moderate to king sentences, one would expect that the cast of no parses (handled by Wei.%hedcl) would be rare in comparison with the oilier two cases (not handled) where file set of parses doesn't include the correct one, or where the grammar has been mistakenly, written to allow undesired pa!~s" Success of the above eflol'ts to folinw only the best path would clearly be relevant here. No doubt Wcischcdel's proeedure can help find a lot of bugs if die t~t examples are chosen with a little care. Ihtt there is sdll interesting work to be done on grammar and parser explanation, and Weisehcdcl is onc of those who intends to explore itThe remaining three papers stem from three separate traditions which reject the strict syntactic ATN formalism, each for its own reasons. They are: Each of these systems claims some advantage over the more widely known and accepted ATN.The somandc grammar parser can be viewed as a variation of the ATN which attempts to cope with the ATN's lack of semantics. Kapian's work builds on work stancd by Burton [Burton76b] and picked up by Hcndrix et al [ltendrtx78J. The semantic grammar parser uses semanuc in.;tcad of syntactic arc categories. "l'his collapses syntax and semantics into a single structure. When an ATN parsing strategy is used the result is actuall7 ~ flexible than a syntactic ATN, but it is faster because syntactic possibilities are elin'*in;tted by the semantics of the domain. "Ilm strategy is justified m terms of the pcrfum'*ancc of actual running systems. Kaplan also calls on a speed criteria in suggest,og (hat when an unkuown word is cncountcred the system assomc all possibilities which will let parsing prncccd. Theo if more than one possibility leads to a successful parse, the system should attempt to rt,~olve the word fi.trthcr by file search or user query.As Kaplan points nut. d)is trick is not limited to semantic grammars, but only to systems having enough constraints. It would hc interesting to know hOW w(:. it woutd work for systems using Oshcrson's [Oshcrson78] prcdicahility criterion. instead of troth for their scmanocs. Oshcrson distinguishes between "green idea", which he says is silly and "marricd bachelor" which he say~ is just raise. Hc ilotes that "idea is oat green" is no better, but "bac[~ehlr is not married" is fine. Prcdicability is a looser constrain* than Kaplan uses, aud if it would still be cuough to limit database search this wo. "l bc intcrcv;ng, because prcdicability is easier to implement across a broad domain.Wilen~ky is a former stu,:tent of Schank's and thus COlt'*us ffom a tradition which emphastzes sentatmcs over syutax. accounts" by lcxical relatkms between constituents (if a phrase, for many of the phenomena explained by the old transfomtational grammar. }:or example. givenThere were reported to have been lions sighted.a typical ATN parser would attempt by register manipulations to make "lions" the suhject. Using a phrase approach, "there be lions sighted" can be taken as meaning "exist lions sighted." wl)erc "lions" is an object and "sighted" an object complement "There" is related to the "be" m "been" by a series of relationships between the argumentS of semantic structures. 
Wilensky appears to have suppressed syntax into his semantic component, and so it will be inrct~ting to sec how he handles the traditional syntactic phenomcna of 4), like passive and verb forms.Finalb, the paper by Hayes shows the influence of the speech recognition projects where bad input gave the Woods A'rN great dimcnlty. Text input is much better than speech input. However, examination of actual input [Malhotra75] does show sentences like:What would have profits have been?Fortunately, these cases are rare. Much more likely is clipsis and the omission of syntax when the semantics are clear. For example, the missing commas inGive ratios of manufacturing costs to sales for plants 1 2 3 and 4 for 72 and 73.Examples like these show that errors and omissions are not random phenomena and that there can be something to the study of errors and how to deal with diem.In summary, it can be seen ~at while much progress has been made in consmtcting u~bic parsers, the basic i~ues, such as the division of syntax. semantics" and pragmatics both in representation and in urdcr uf processing, are still up for grabs. 'l'be problem has plenty of structure, so there is good fun to be had. | null | null | null | null | Main paper:
:
"Illcre has been a shift of emphasis away from highly ~tmctured systems of complex rules as the principal repository of infi~mmtion about the syntax of a language towards a view in which the responsibility is distributed among the Icxicoo. semantic parts of the linguistic description, aod a cognitive or strategic component. Concomitantly, interest has shiRed from al!lorithms for syntactic analysis and generation, in which the central stnlctorc and the exact seqtlencc of events are paramount, to systems iu which a heavier burden is carried by the data stl ucture and in wilich the order of,:vents is a m,.~ter of strategy.['his ycar. the papers of the session represent a greater diversity of rescan:h directions. The paper by Hayes. and thc paper by Wilcnsky and Aren~ arc both examples of what Kay had in mind. but the paper I)y Church, with rcgard to the question of algorithms, is quite the opposite. He {tolds that once the full range uf constraints dcscribing pc~plc's processing behavior has been captul'ed, the best parsing strategies will be rather straightforwarcL and easily cxplaincd as algorithms.Perilaps the seven papers in this year's session can best be introduecd by briefly citing ~mc of the achievcmcqts and problems reported in the works they refcrence,In thc late i960"s Woods tweeds70] capped an cfTort by several people to dcvch)p NI'N parsing. 'lllis well known technique applies a smdghtforward top down, left CO right` dcpth fic~t pat.~ing algorithm to a syntactic grammar. I-:~pccialiy in the compiled fi)rm produced by Ilorton [Bnrton76~, ] . the parser was able to produce the first parse in good time. but without ~mantic constraints, numcroos syn~ictic analyses could be and ~,mctimcs were fou.nd, especially in scntenccs with conjunctions. A strength of the system was the ATN grammar, which can be dc~ribcd as a sct of context frec production rules whose right hand sides arc finite statc machincs and who.~ U'ansition arcs have bccn augmented with functions able to read and set registers, and also able to block a transition on their an:. Many people have found this a convenient fonnulism in which m develop grammars of Engtish. The Woods ATN parser was a great success and attempts were made to exploit it (a) as a modc[ of human processing and (b) as a tool for writing grammars. At the same time it was recognized to havc limimdoos. It wasn't tolerant of errors, and it couldn't handle unknown words or constructions (there were n~'tny syntactic constmcdons which it didn't know). In addidon, the question answering system fed by the parser had a weak notion of word and phrase .~emantics and it was not always able to handle quantificrs properly. It is not ctcar thcs¢ components could have supported a stronger interaction with syntactic parsing, had Woods chosen to a~cmpt it.On the success side. Kaplan [Kaplan72] was inspired to claim that the ATN parser provided a good model tbr some aspects of human processing. Some aspects which might bc modeled are: by ordering the arcs leaving the state where the head noun of'an NP has been ~'ccpccd: a Ix)p am (tcrminuting the NP) is tried before an an: accepting a modifying relative clause. ]-h)wcver, Ricil [Rich75] puims out that dfis an: ordering solution would seem to have diltlculdcs with 2). This sentence is often nut peracived2) They told the girl that Bill liked that he would be at the loath;all game.as requiring backup, yet if the arcs an: ordered as for I), it does require backup. There is no doubt that whatever is going on. 
the awareness of backup in 3) is so much stronger than in 2) that it seems like a different phenomcnoo. To resolve this,3)The horse raced past the b,'u'n fell.one could claim that perceived backup is some fimction of' the length of the actual b~kup, or maybe of the degree of commiunent to the original path (althoogh it isnt clear what this would mean in ATN terms).In this session. Ferrari and Stock will turn the are ordering game around and describe, for actual tex~ the probability that a given arc is the correct exit an: from a node. given the an: by wiuch the parser arrived at the node. [t will be intcr~ting to look at their distributions. [n the speech project at IBM War, sou Laboratories [Baker75] it was discovered some time ago that, for a given text, the syntactic class era word could be predicted correctly over 90% of the umo given only the syntactic class of the preceding word` Interestingly, the correctness of' predictions fell off less than 10% whcn only the current word w~ used. One wonders if this same level of skewncss holds across texts, or (what we will hear) for the continuation of phrases. These results should be helpful in discussing the whole issue of arc orderiog" Implicit in any al~ ordering strategy is the assumption that not all parses of a sentence will be fi)und. Having the "best" path, the parscr will stop wben it gets an acceptable analysis. Arc ordering helps find that "best' path. Marcus [Man:us7g] , agreed with the idea of following only a best path, but he claimed that the reason there is no pe~eived backup in 2) is that the human parser is able to look ahead a few constituents iostead of just one s~ate and one eoilstitucnt in making a u'ansition. He claims this makes a more accurate model of human garden path behavior, but it doesn't address the issue of unlimited stuck depth. Here, Church will describe a parser similar in design co Marcus', except that it conserves memory. This allows Church to address psychological facLS not addrc~qed by either Marcus or the ATN models. Church claims that exploiting stack size constraints will incn:ase the cimnces of building a good best path parser.Besides psychological modeling, thcre is also an interest m using thc ATN ft)nnalism for writing and teaching grammars. Paramount here is e:;planation, both of the grammar and its appiicatinn to a particu!ar sentence. The papcr by Kchler and Woods reports on this. Weischcdcl picks a particular problem, responding to an input which the ATN can't handle. He a~,'xiatcs a list of diagnostic couditions and actions with each state. When no pur.xc is found, the parser finds tile last store on the path which progressed the thnhcst d)rongh the input string and executes its diagnostic conditions and actions. When a parser uses ,rely syutactic constraints" one cxpects it to find a lut of parses. UsuuJly the number of parses grows marc than tincarly with sentence length. Thus, for a ~tirly COmlflete grammar and moderate to king sentences, one would expect that the cast of no parses (handled by Wei.%hedcl) would be rare in comparison with the oilier two cases (not handled) where file set of parses doesn't include the correct one, or where the grammar has been mistakenly, written to allow undesired pa!~s" Success of the above eflol'ts to folinw only the best path would clearly be relevant here. No doubt Wcischcdel's proeedure can help find a lot of bugs if die t~t examples are chosen with a little care. 
Ihtt there is sdll interesting work to be done on grammar and parser explanation, and Weisehcdcl is onc of those who intends to explore itThe remaining three papers stem from three separate traditions which reject the strict syntactic ATN formalism, each for its own reasons. They are: Each of these systems claims some advantage over the more widely known and accepted ATN.The somandc grammar parser can be viewed as a variation of the ATN which attempts to cope with the ATN's lack of semantics. Kapian's work builds on work stancd by Burton [Burton76b] and picked up by Hcndrix et al [ltendrtx78J. The semantic grammar parser uses semanuc in.;tcad of syntactic arc categories. "l'his collapses syntax and semantics into a single structure. When an ATN parsing strategy is used the result is actuall7 ~ flexible than a syntactic ATN, but it is faster because syntactic possibilities are elin'*in;tted by the semantics of the domain. "Ilm strategy is justified m terms of the pcrfum'*ancc of actual running systems. Kaplan also calls on a speed criteria in suggest,og (hat when an unkuown word is cncountcred the system assomc all possibilities which will let parsing prncccd. Theo if more than one possibility leads to a successful parse, the system should attempt to rt,~olve the word fi.trthcr by file search or user query.As Kaplan points nut. d)is trick is not limited to semantic grammars, but only to systems having enough constraints. It would hc interesting to know hOW w(:. it woutd work for systems using Oshcrson's [Oshcrson78] prcdicahility criterion. instead of troth for their scmanocs. Oshcrson distinguishes between "green idea", which he says is silly and "marricd bachelor" which he say~ is just raise. Hc ilotes that "idea is oat green" is no better, but "bac[~ehlr is not married" is fine. Prcdicability is a looser constrain* than Kaplan uses, aud if it would still be cuough to limit database search this wo. "l bc intcrcv;ng, because prcdicability is easier to implement across a broad domain.Wilen~ky is a former stu,:tent of Schank's and thus COlt'*us ffom a tradition which emphastzes sentatmcs over syutax. accounts" by lcxical relatkms between constituents (if a phrase, for many of the phenomena explained by the old transfomtational grammar. }:or example. givenThere were reported to have been lions sighted.a typical ATN parser would attempt by register manipulations to make "lions" the suhject. Using a phrase approach, "there be lions sighted" can be taken as meaning "exist lions sighted." wl)erc "lions" is an object and "sighted" an object complement "There" is related to the "be" m "been" by a series of relationships between the argumentS of semantic structures. Wilensky appears to have suppressed syntax into his semantic component, and so it will be inrct~ting to sec how he handles the traditional syntactic phenomcna of 4), like passive and verb forms.Finalb, the paper by Hayes shows the influence of the speech recognition projects where bad input gave the Woods A'rN great dimcnlty. Text input is much better than speech input. However, examination of actual input [Malhotra75] does show sentences like:What would have profits have been?Fortunately, these cases are rare. Much more likely is clipsis and the omission of syntax when the semantics are clear. 
For example, the missing commas inGive ratios of manufacturing costs to sales for plants 1 2 3 and 4 for 72 and 73.Examples like these show that errors and omissions are not random phenomena and that there can be something to the study of errors and how to deal with diem.In summary, it can be seen ~at while much progress has been made in consmtcting u~bic parsers, the basic i~ues, such as the division of syntax. semantics" and pragmatics both in representation and in urdcr uf processing, are still up for grabs. 'l'be problem has plenty of structure, so there is good fun to be had.
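The diagnostic-state idea attributed to Weischedel in this survey (condition-action pairs attached to ATN states, run for the state that progressed furthest when no parse is found) can be sketched as follows; this is an illustration only, and the state names, condition table, and calling convention are invented rather than taken from any of the cited systems:

```python
# Sketch: each ATN state carries (condition, message) pairs; on failure,
# the state that consumed the most input explains what it still expected.

def diagnose(failed_paths, diagnostics):
    """failed_paths: list of (state_name, words_consumed, next_word) for blocked paths.
    diagnostics:  {state_name: [(condition_fn, message), ...]} -- hypothetical table."""
    state, consumed, next_word = max(failed_paths, key=lambda p: p[1])
    messages = [msg for cond, msg in diagnostics.get(state, []) if cond(next_word)]
    return messages or ["Parse failed at word %d ('%s')." % (consumed + 1, next_word)]

# Hypothetical diagnostic table for an NP state that expects a head noun.
diagnostics = {
    "NP/DET": [
        (lambda w: w is None, "The sentence ends where a noun was expected."),
        (lambda w: w is not None, "Expected a noun after the determiner."),
    ],
}

print(diagnose([("NP/DET", 3, "quickly"), ("S/", 1, "the")], diagnostics))
```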
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 536 | null | null | null | null | null | null | null | null | null |
b4751ee1443735c9b76f1dfd56af80b9f5a4035f | 12282464 | null | Metaphor - A Key to Extensible Semantic Analysis | Interpreting metaphors is an integral and inescapable process in human understanding of natural language. This paper discusses a method of analyzing metaphors based on the existence of a small number of generalized metaphor mappings. Each generalized metaphor contains a recognition network, a basic mapping, additional transfer mappings, and an implicit intention component. It is argued that the method reduces metaphor interpretation from a reconstruction to a recognition task. Implications towards automating certain aspects of language learning are also discussed, t An Opening Argument A dream of many computational linguists is to produce a natural language analyzer that tries its best to process language that "almost but not quite" corresponds to the system's grammar, dictionary and semantic knowledge base. In addition, some of us envision a language analyzer that improves its performance with experience. To these ends, I developed the proiect and integrate algorithm, a method of inducing possible meanings of unknown words from context and storing the new information for eventual addition to the dictionary [1]. While useful, this mechanism addresses only one aspect of the larger problem, accruing certain classes of word definitions in the dictionary. In this paper, I focus on the problem of augmenting the power of a semantic knowledge base used for language analysis by means of metaphorical mappings. | {
"name": [
"Carbonell, Jaime G."
],
"affiliation": [
null
]
} | null | null | 18th Annual Meeting of the Association for Computational Linguistics | 1980-06-01 | 11 | 47 | null | The pervasiveness of metaphor in every aspect of human communication has been convincingly demonstrated by Lakoff and Johnson [4}, Ortony [6] , Hobbs [3] and marly others.However, the creation of a process model to encompass metaphor comprehension has not been of central concern? From a computational standpoint, metaphor has been viewed as an obstacle, to be tolerated at best and ignored at worst. For instance, Wilks [9] gives a few rules on how to relax semantic constraints in order for a parser to process a sentence in spite of the metaphorical 1This research was sponsored in part by the Defense Advanced Research Prelects Agency (DOD). Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-155t. The views and conclusions contained in this document are those of the author, and should not be interpreted as rel3resenting the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government. 2Hobbs has made an initial stab at this problem, although h=s central concern appears to be ~n characterizing and recognizing metaphors in commonly-encountered utterances.usage of a particular word. I submit that it is insufficient merely to tolerate a metaphor.Understanding the metaphors used in language often proves to be a crucial process in establishing complete and accurate interpretations of linguistic utterances.There appear to be a small number of general metaphors (on the order of fifty) that pervade commonly spoken English. Many of these were identified and exemplified by Lakoff and Johnson [4] . For instance: more-is-up. less.is.down and the conduit metaphor -Ideas are objects, words are containers, communication consists of putting objects (ideas) into containers (words), sending the containers along a conduit (a communications medium. such as speech, telephone lines, newspapers, letters), whereupon the recipient at the other end of the conduit unpackages the objects from their containers (extracts the ideas from the words). Both of these metaphors apply in the examples discussed below.The computational significance of the existence of a small set of general metaphors underlies the reasons for my current investigation: The problem of understanding a large class of metaphors may be reduced from a reconstruction to a recognition task.That is, the identification of a metaphorical usage as an instance of one of the general metaphorical mappings is a much more tractable process than reconstructing the conceptual framework from the bottom up each time a new metaphor-instance is encountered. Each of the general metaphors contains not only mappings of the form: "X is used to mean Y in context Z", but inference rules to enrich the understanding process by taking advantage of the reasons why the writer may have chosen the particular metaphor (rather than a different metaphor or a literal rendition). | null | null | of Metaphors t propose to represent each general metaphor in the following manner:A Recoanition Network contains the information necessary to decide whether or not a linguistic utterance is an instantiation of the general metaphor. On the first-pass implementation I will use a simple discrimination network. writer chooses one these metaphors, as a function of the ideas he wants to convey to the reader. 
If the understander is to reconstruct those ideas, he ought to know why the particular metaphor was ChOSen. This information is precisely that which the metaphor conveys that is absent from a literal expression of the same concept. (E.g.. "John is completely crazy about Mary" vs. "John loves mary very much". The former implies that John may exhibit impulsive or uncharacteristic behavior, and that his present state of mind may be less permanent than in the latter case. Such information ought to be stored with the love-is-madness metaphor unless the understanding system is sufficiently sophisticated to make these inferences by other means.)• A Transfer Maooino, analogous to Winston's Transfer Frames [10] , is a filter that determines which additional Darts of the literal input may be mapDed onto the conceptual representation, and establishes exactly the transformation that this additional information must undergo. Hence, in "Prices are soaring", we need to use the basic maDDing of the more-is.up metaphor to understand that prices are increasing, and we must use the transfer map of the same metaphor to interpret "soar" ( = rising high and fast) as large increases that are happening fast.For this metaphor, altitude descriptors map into corresponding Quantit~ descriptors and rate descriptors remain unchanged. This information is part of the transfer maDDing. In general, the default assumption is that all descriptors remain unchanged unless specified otherwise -hence, the frame problem {5] is circumvented.into the Process Model The information encoded in the general metaphors must be brought to bear in the understanding process. Here, 1 outli,'q the most direct way to extract maximal utility from the general.metaphor information.Perhaps a more subtle process that integrates metaphor information more closely w h other conceptual knowledge iS required. An attempt to implement this method in the near future will serve as a pragmatic measure of its soundness.The general process for applying metaphor-mapping knowledge is the following:1. Attempt to analyze the input utterance in a literal, conventional fashion. If this fails, and the failure is caused by a semantic cese-constraint violation, go to the next step. (Otherwise, the failure is probably not due to the presence of a metaphor.)2. Apply the recognition networks of the generalized metaphors. If on e succeeds, then retrieve all the information stored with that metaphorical maDDing and go on to the next step. (Otherwise, we have an unknown metaphor or a different failure in the originai semantic interpretation. Store this case for future evaluation by the system builder.)3. Use the basic maDDing to establish the semantic framework of the input utterance.Use the transfer maDDing to fill the slots of the meaning framework with the entities in the input, transforming them as specified in the transfer map. If any inconsistenc=es arise in the meaning framework, either the wrong metaphor was chosen, or there is a second metaphor in the input (or the input is meaningless).Integrate into the semantic framework any additional information found in the implicit-intention component that does not contradict existing information.metaphor within the scope of the present dialog (or text). It is likely that the same metaphor will be used again with the same transfer mappings present but with additional information conveyed. (Often one participant in a dialog "picks up" the metaphors used by by the other participant. 
Moreover, some metaphors can serve to structure an entire conversation.)and Packaging Metaphors We have seen how the recognition of basic general metaphors greatly structures and facilitates the understanding process. However, there are many problems in understanding metaphors and analogies that we have not yet addressed. For instance, we have said little about explicit analogies found in text. I believe the computational process used in understanding analogies to be the same as that used in understanding metaphors, The difference is one of recognition and universality of acceptance in the underlying mappings. That is, an analogy makes the basic mapping explicit (sometimes the additional transfer maps are also detailed), whereas in a metaphor the mapping must be recognized (or reconstructed) by the understander. However, the general metaphor mappings are already known to the understander -he need only recognize them and instantiate them. Analogical mappings are usually new mappings, not necessarily known to the understander. Therefore, such mappings must be spelled out (in establishing the analogy) before they can be used. If a maDDing is often used as an analogy it may become an accepted metaphor; the explanatory recluirement is Suppressed if the speaker believes his listener has become familiar with the maDDing.This suggests one method of learning new metaphors. A maDDing abstracted from the interpretation of several analogies can become packaged into a metaphor definition. The corTesDonding subparts of the analogy will form the transfer map, if they are consistent across the various analogy instances. The recognition network can be formed by noting the specific semantic features whose presence was required each time the analogy was stated and those that were necessarily refered to after the statement of the analogy. The most difficult Dart to learn is the intentional component. The understander would need to know or have inferred the writer's intentions at the time he expressed the analogy.Two other issues we have not yet addressed are: Not all metaphors are instantiations of a small set of generalized metaphor mappings. Many metaphors appear to become frozen in the language, either packaged into phrases with fixed meaning (e.g., "prices are going through the roof", an instance of the more-is-up metaphor), or more specialized entities than the generalized mappings, but not as specific as fixed phrases. I set the former issue aside remarkino that if a small set of general constructs can account for the bulk of a complex phenomenon, then they merit an in-depth investigation. Other metaphors may simpty be less-often encountered mappings. The latter issue, however, requires further discussion. I propose that typical instantiations of generalized metaphors be recognized and remembered as part of the metaphor interpretation process. These instantiations will serve to grow a hierarchy of often.encountered metaphorical mappings from the top down. That is, typical specializations of generalized metaphors are stored in a specialization hierarchy (similar to a semantic network, with ISA inheritance pointers to the generalized concept of which they are specializations). These typical instanceS can in turn spawn more specific instantiations (if encountered with sufficient frequency in the language analysis), and the process can continue until until the fixed-phrase level is reached. Clearly. 
growing all possible specializations of a generalized maDDing is prohibitive in space, and the vast majority of the specializations thus generated would never be encountered in processing language. The sparseness of typical instantiations is the key to saving space. Only those instantiations of more general me. ~ohors that are repeatedly encountered are assimilated into t, Je hieraruhy. Moreover, the number or frequency of reclui=ed instances before assimilation takes place is a parameter that can be set according to the requirements of the system builder (or user). In this fashion, commonly-encountered metaphors will be recognized and understood much faster than more obscure instantiations of the general metaphors.It is important to note that creating new instantiations of more general mappings is a much simpler process than generalizing existing concepts. Therefore, this type of specialization-based learning ought to be Quite tractable with current technology.Brought to Light Let us see how to apply the metaphor interpretation method to some newspaper headlines that rely on complex metaphors. Consider the following example from the New York Times:Speculators brace for a crash in the soaring gold market.Can gold soar? Can a market soar? Certainly not by any literal interpretation. A language interpreter could initiate a complex heuristic search (or simply an exhaustive search) to determine the most likely ways that "soaring" could modify gold or gold markets. For instance, one can conceive of a spreading.activation search starting from the semantic network nodes for "gold market" and "soar" (assuming such nodes exist in the memory) to determine the minimal.path intersections, much like Quillian originally proposed {7]. However, this mindless intersection search is not only extremely inefficient, but will invariably yield wrong answers. (E.g., a golcl market ISA market, and a market can sell fireworks that soar through the sky -to suggest a totally spurious connection.) A system absolutely requires knowledge of the mappings in the more-is.ul~ metaphor to establish the appropriate and only the appropriate connection.In comparison, consider an application of the general mechanism described in the previous section to the "soaring gold market" example. Upon realizing that a literaJ interpretation fails, the system can take the most salient semantic features of "soaring" and "gold markets" and apply them to the recognition networks of the generaJ metaphors.Thus, "upward movement" from soaring matches "up" in the more-is.up metaphor, while "increase in value or volume" of "gold markets" matches the "more" side of the metaphor. The recognition of our example as an instance of the general more-is-up metaphor establishes its basic meaning. It is crucial to note that without knowledge that the concept up (or ascents) may map to more (or increases), there appears to be no general tractable mechanism for semantic interpretation of our example.The transfer map embellishes the original semantic framework of a gold market whose value is increasing. Namely, "soaring" establishes that the increase is rapid and not firmly supported. (A soaring object may come tumbling down -> rapid increases in value may be followed by equally rapid decreases). Some inferences that are true of things that soar can also transfer: If a soaring object tumbles it may undergo a significant negative state change -> the gold market (and those who ride it) may suffer significant neaative state chan.qes. 
However, physical states map onto financial states.The less-is-down half of the metaphor is, of course, also useful in this example, as we saw in the preceding discussion. Moreover. this half of the metaphor is crucial to understand the phrase "bracing for a crash". This phrase must pass through the transfer map to make sense in the financial gold market world. In fact. it passes through very easily. Recalling that physical states map to financial states, "bracing" maps from "preparing for an expected sudden physical state change" to "preparing for a sudden financial state change". "Crash" refers directly to the cause of the negative physical state change, and it is mapped onto an analogous cause of the financial state change.More-is-up. less-is-down is such a ubiquitous metaphor that there are probably no specific intentions conveyed by the writer in his choice of the metaphor (unlike the love-is-madness metaphor).The instantiation of this metaphor should be remembered in interpreting subsequent text. For instance, had our example continued:Analysts expect gold prices to hit bottom soon, but investors may be in for a harrowing roller-coaster ride.We would have needed the context of: "uP means increaSes in the gold market, and clown means decreases in the same market, which can severely affect investors" before we could hope to understand the "roller-coaster ride" as "unpredictable increases and decreases suffered by speculators and investors".Press Censorship is a barrier to free communication.I have used this example before to illustrate the difficulty in interpreting the meaning of the word "barrier". A barrier is a physical object that disenables physical motion through its Location (e.g., "The fallen tree is a barrier to traffic"). Previously I proposed a semantic relaxation method to understand an "information transfer" barrier. However, there is a more elegant solution based on the conduit metaphor. The press is a conduit for communication. (Ideas have been packaged into words in newspaper articles and must now be distributed along the mass media conduit.) A barrier can be interpreted as a physical blockage of this conduit thereby disenabling the dissemination of information as packaged ideas, The benefits of applying the conduit metaphor is that only the original "physical object" meaning of barrier is required by the understanding system. In addition, the retention of the basic meaning of barrier (rather than some vague abstraction thereof) enables a language understander to interpret sentences like "The censorship barriers were lifted by the new regime." Had we relaxed the requirement that a barrier be a physical object, it would be difficult to interpret what it means to "lift" an abstract disenablement entity. On the other hand, the lifting of a physical object implies that its function as a disenabler of physical transfer no longer applies; therefore, the conduit is again open, a~nd free communication can proceed.In both our examples the interpretation of a metaphor to understand one sentence helped considerably in unaerstanding a subsequent sentence that retered to the metaphorical mapping established earlier. Hence, the significance of metaphor interpretation for understanding coherent text or dialog can hardly be overestimated, Metaphors often span several sentences and may structure the entire text around a particular metaphorical mapping (or a more explicit analogy) that helps convey the writer's central theme or idea. 
A future area of investigation for this writer will focus on the use of metaphors and analogy to root new ideas on old concepts and thereby convey them in a more natural and comprehensible manner. If metaphors and analogies help humans understand new concepts by relating them to existing knowledge, perhaps metaphors and analogies should also be instrumental in computer models that strive to interpret new conceptual information.Up The ideas described in this paper have not yet been implemented in a functioning computer system. I hope to start incorpor,3ting them into the POLITICS parser [2] , which is modelled after Riesbeck's rule.based ELI [8] .The philosophy underlying this work is that Computational Linguistics and Artificial Intelligence can take full advantage of -not merely tolerate or circumvent -metaphors used extensively in natural language, in case the reader is still in doubt about the necessity to analyze metaphor as an integral Dart of any comprehensive natural language system, I point out that that there are over 100 metaphors in the above text, not counting the examples. To illustrate further the ubiquity of metaphor and the difficulty we sometimes have in realizing its presence, I note that each section header and the title of this PaDer contain undeniable metaphors.8. | null | Main paper:
steps towards codifying knowledge:
of Metaphors t propose to represent each general metaphor in the following manner:A Recoanition Network contains the information necessary to decide whether or not a linguistic utterance is an instantiation of the general metaphor. On the first-pass implementation I will use a simple discrimination network. writer chooses one these metaphors, as a function of the ideas he wants to convey to the reader. If the understander is to reconstruct those ideas, he ought to know why the particular metaphor was ChOSen. This information is precisely that which the metaphor conveys that is absent from a literal expression of the same concept. (E.g.. "John is completely crazy about Mary" vs. "John loves mary very much". The former implies that John may exhibit impulsive or uncharacteristic behavior, and that his present state of mind may be less permanent than in the latter case. Such information ought to be stored with the love-is-madness metaphor unless the understanding system is sufficiently sophisticated to make these inferences by other means.)• A Transfer Maooino, analogous to Winston's Transfer Frames [10] , is a filter that determines which additional Darts of the literal input may be mapDed onto the conceptual representation, and establishes exactly the transformation that this additional information must undergo. Hence, in "Prices are soaring", we need to use the basic maDDing of the more-is.up metaphor to understand that prices are increasing, and we must use the transfer map of the same metaphor to interpret "soar" ( = rising high and fast) as large increases that are happening fast.For this metaphor, altitude descriptors map into corresponding Quantit~ descriptors and rate descriptors remain unchanged. This information is part of the transfer maDDing. In general, the default assumption is that all descriptors remain unchanged unless specified otherwise -hence, the frame problem {5] is circumvented.
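A minimal data-structure sketch of the four components just described (recognition network, basic mapping, transfer mapping, implicit intentions) follows; the MORE-IS-UP entries are illustrative assumptions, not taken from an implemented system:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class GeneralizedMetaphor:
    """One generalized metaphor mapping, per the four components described above."""
    name: str
    recognizes: Callable[[dict], bool]      # recognition network (here: a single predicate)
    basic_mapping: Dict[str, str]           # source concept -> target concept
    transfer_map: Dict[str, str]            # how additional descriptors carry over
    implicit_intentions: List[str] = field(default_factory=list)

# Illustrative MORE-IS-UP instance (contents are assumptions for this sketch).
MORE_IS_UP = GeneralizedMetaphor(
    name="more-is-up",
    recognizes=lambda frame: frame.get("verb-class") == "upward-motion"
                             and frame.get("subject-class") == "quantity",
    basic_mapping={"rise": "increase", "fall": "decrease"},
    transfer_map={"altitude": "quantity", "rate": "rate"},  # rate descriptors pass unchanged
    implicit_intentions=["the size or speed of the change is salient to the writer"],
)
```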
a glimpse into the process model:
The information encoded in the general metaphors must be brought to bear in the understanding process. Here, I outline the most direct way to extract maximal utility from the general-metaphor information. Perhaps a more subtle process that integrates metaphor information more closely with other conceptual knowledge is required. An attempt to implement this method in the near future will serve as a pragmatic measure of its soundness.

The general process for applying metaphor-mapping knowledge is the following:

1. Attempt to analyze the input utterance in a literal, conventional fashion. If this fails, and the failure is caused by a semantic case-constraint violation, go to the next step. (Otherwise, the failure is probably not due to the presence of a metaphor.)

2. Apply the recognition networks of the generalized metaphors. If one succeeds, then retrieve all the information stored with that metaphorical mapping and go on to the next step. (Otherwise, we have an unknown metaphor or a different failure in the original semantic interpretation. Store this case for future evaluation by the system builder.)

3. Use the basic mapping to establish the semantic framework of the input utterance.

4. Use the transfer mapping to fill the slots of the meaning framework with the entities in the input, transforming them as specified in the transfer map. If any inconsistencies arise in the meaning framework, either the wrong metaphor was chosen, or there is a second metaphor in the input (or the input is meaningless).

5. Integrate into the semantic framework any additional information found in the implicit-intention component that does not contradict existing information.
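A minimal sketch of this control loop is given below, assuming the GeneralMetaphor structure sketched earlier. Step 1 (the literal parse) and the step-4 consistency check are omitted, and the signature taking pre-extracted features, roles and descriptors is an assumption made purely to keep the fragment self-contained; a real implementation would operate over conceptual representations rather than flat dictionaries.

```python
def interpret(features, literal_roles, extra_descriptors, metaphor_library, context):
    """Apply steps 2-5 of the metaphor-application process after a literal
    parse has already failed on a semantic case-constraint violation."""
    # Step 2: run the recognition networks of the generalized metaphors.
    metaphor = next((m for m in metaphor_library if m.recognizes(features)), None)
    if metaphor is None:
        return None        # unknown metaphor: store the case for the builder

    # Step 3: the basic mapping establishes the semantic framework.
    framework = {metaphor.basic_mapping.get(r, r): v
                 for r, v in literal_roles.items()}

    # Step 4: the transfer mapping transforms additional descriptors;
    # unlisted descriptors pass through unchanged (the default assumption).
    # (The consistency check on the resulting framework is omitted here.)
    for descriptor, value in extra_descriptors.items():
        framework[metaphor.transfer_mapping.get(descriptor, descriptor)] = value

    # Step 5: integrate non-contradictory implicit-intention information.
    for intent in metaphor.implicit_intentions:
        framework.setdefault("conveys", []).append(intent)

    # Remember the instantiation within the scope of the present dialog.
    context.setdefault("active_metaphors", []).append(metaphor.name)
    return framework
```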
two examples brought to light:
Let us see how to apply the metaphor interpretation method to some newspaper headlines that rely on complex metaphors. Consider the following example from the New York Times:

Speculators brace for a crash in the soaring gold market.

Can gold soar? Can a market soar? Certainly not by any literal interpretation. A language interpreter could initiate a complex heuristic search (or simply an exhaustive search) to determine the most likely ways that "soaring" could modify gold or gold markets. For instance, one can conceive of a spreading-activation search starting from the semantic network nodes for "gold market" and "soar" (assuming such nodes exist in the memory) to determine the minimal-path intersections, much like Quillian originally proposed [7]. However, this mindless intersection search is not only extremely inefficient, but will invariably yield wrong answers. (E.g., a gold market ISA market, and a market can sell fireworks that soar through the sky - to suggest a totally spurious connection.) A system absolutely requires knowledge of the mappings in the more-is-up metaphor to establish the appropriate and only the appropriate connection.

In comparison, consider an application of the general mechanism described in the previous section to the "soaring gold market" example. Upon realizing that a literal interpretation fails, the system can take the most salient semantic features of "soaring" and "gold markets" and apply them to the recognition networks of the general metaphors. Thus, "upward movement" from soaring matches "up" in the more-is-up metaphor, while "increase in value or volume" of "gold markets" matches the "more" side of the metaphor. The recognition of our example as an instance of the general more-is-up metaphor establishes its basic meaning. It is crucial to note that without knowledge that the concept up (or ascents) may map to more (or increases), there appears to be no general tractable mechanism for semantic interpretation of our example.

The transfer map embellishes the original semantic framework of a gold market whose value is increasing. Namely, "soaring" establishes that the increase is rapid and not firmly supported. (A soaring object may come tumbling down -> rapid increases in value may be followed by equally rapid decreases.) Some inferences that are true of things that soar can also transfer: if a soaring object tumbles it may undergo a significant negative state change -> the gold market (and those who ride it) may suffer significant negative state changes. However, physical states map onto financial states. The less-is-down half of the metaphor is, of course, also useful in this example, as we saw in the preceding discussion. Moreover, this half of the metaphor is crucial to understand the phrase "bracing for a crash". This phrase must pass through the transfer map to make sense in the financial gold market world. In fact, it passes through very easily. Recalling that physical states map to financial states, "bracing" maps from "preparing for an expected sudden physical state change" to "preparing for a sudden financial state change". "Crash" refers directly to the cause of the negative physical state change, and it is mapped onto an analogous cause of the financial state change.
More-is-up, less-is-down is such a ubiquitous metaphor that there are probably no specific intentions conveyed by the writer in his choice of the metaphor (unlike the love-is-madness metaphor). The instantiation of this metaphor should be remembered in interpreting subsequent text. For instance, had our example continued:

Analysts expect gold prices to hit bottom soon, but investors may be in for a harrowing roller-coaster ride.

We would have needed the context of "up means increases in the gold market, and down means decreases in the same market, which can severely affect investors" before we could hope to understand the "roller-coaster ride" as "unpredictable increases and decreases suffered by speculators and investors".

Press Censorship is a barrier to free communication.

I have used this example before to illustrate the difficulty in interpreting the meaning of the word "barrier". A barrier is a physical object that disenables physical motion through its location (e.g., "The fallen tree is a barrier to traffic"). Previously I proposed a semantic relaxation method to understand an "information transfer" barrier. However, there is a more elegant solution based on the conduit metaphor. The press is a conduit for communication. (Ideas have been packaged into words in newspaper articles and must now be distributed along the mass media conduit.) A barrier can be interpreted as a physical blockage of this conduit, thereby disenabling the dissemination of information as packaged ideas. The benefit of applying the conduit metaphor is that only the original "physical object" meaning of barrier is required by the understanding system. In addition, the retention of the basic meaning of barrier (rather than some vague abstraction thereof) enables a language understander to interpret sentences like "The censorship barriers were lifted by the new regime." Had we relaxed the requirement that a barrier be a physical object, it would be difficult to interpret what it means to "lift" an abstract disenablement entity. On the other hand, the lifting of a physical object implies that its function as a disenabler of physical transfer no longer applies; therefore, the conduit is again open, and free communication can proceed.

In both our examples the interpretation of a metaphor to understand one sentence helped considerably in understanding a subsequent sentence that referred to the metaphorical mapping established earlier. Hence, the significance of metaphor interpretation for understanding coherent text or dialog can hardly be overestimated. Metaphors often span several sentences and may structure the entire text around a particular metaphorical mapping (or a more explicit analogy) that helps convey the writer's central theme or idea. A future area of investigation for this writer will focus on the use of metaphors and analogy to root new ideas on old concepts and thereby convey them in a more natural and comprehensible manner. If metaphors and analogies help humans understand new concepts by relating them to existing knowledge, perhaps metaphors and analogies should also be instrumental in computer models that strive to interpret new conceptual information.
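As a rough illustration, the toy structures sketched earlier could be instantiated for the gold-market headline as follows; the feature names, role labels and descriptor values are invented for the example and a real system would derive them from its lexicon and semantic parser.

```python
# Salient semantic features extracted from "the soaring gold market".
features = {"upward-motion", "quantity-bearing"}

context = {}
meaning = interpret(
    features,
    literal_roles={"up": "gold-market-value"},         # "soaring" = moving up
    extra_descriptors={"rate": "fast",
                       "altitude": "high",
                       "physical-state": "unstable"},  # soaring things may fall
    metaphor_library=[MORE_IS_UP],
    context=context,
)
# `meaning` now frames the headline as a rapid, possibly unstable *increase*
# in gold-market value, and `context` records that more-is-up is active, so a
# later "roller-coaster ride" can be read against the same mapping.
print(meaning, context)
```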
Remember this instantiation of the general metaphor within the scope of the present dialog (or text). It is likely that the same metaphor will be used again with the same transfer mappings present but with additional information conveyed. (Often one participant in a dialog "picks up" the metaphors used by the other participant. Moreover, some metaphors can serve to structure an entire conversation.)

learning and packaging metaphors:

We have seen how the recognition of basic general metaphors greatly structures and facilitates the understanding process. However, there are many problems in understanding metaphors and analogies that we have not yet addressed. For instance, we have said little about explicit analogies found in text. I believe the computational process used in understanding analogies to be the same as that used in understanding metaphors. The difference is one of recognition and universality of acceptance in the underlying mappings. That is, an analogy makes the basic mapping explicit (sometimes the additional transfer maps are also detailed), whereas in a metaphor the mapping must be recognized (or reconstructed) by the understander. However, the general metaphor mappings are already known to the understander - he need only recognize them and instantiate them. Analogical mappings are usually new mappings, not necessarily known to the understander. Therefore, such mappings must be spelled out (in establishing the analogy) before they can be used. If a mapping is often used as an analogy it may become an accepted metaphor; the explanatory requirement is suppressed if the speaker believes his listener has become familiar with the mapping.

This suggests one method of learning new metaphors. A mapping abstracted from the interpretation of several analogies can become packaged into a metaphor definition. The corresponding subparts of the analogy will form the transfer map, if they are consistent across the various analogy instances. The recognition network can be formed by noting the specific semantic features whose presence was required each time the analogy was stated and those that were necessarily referred to after the statement of the analogy. The most difficult part to learn is the intentional component. The understander would need to know or have inferred the writer's intentions at the time he expressed the analogy.

Two other issues we have not yet addressed are: not all metaphors are instantiations of a small set of generalized metaphor mappings. Many metaphors appear to become frozen in the language, either packaged into phrases with fixed meaning (e.g., "prices are going through the roof", an instance of the more-is-up metaphor), or more specialized entities than the generalized mappings, but not as specific as fixed phrases. I set the former issue aside, remarking that if a small set of general constructs can account for the bulk of a complex phenomenon, then they merit an in-depth investigation. Other metaphors may simply be less-often encountered mappings. The latter issue, however, requires further discussion. I propose that typical instantiations of generalized metaphors be recognized and remembered as part of the metaphor interpretation process. These instantiations will serve to grow a hierarchy of often-encountered metaphorical mappings from the top down. That is, typical specializations of generalized metaphors are stored in a specialization hierarchy (similar to a semantic network, with ISA inheritance pointers to the generalized concept of which they are specializations).
These typical instances can in turn spawn more specific instantiations (if encountered with sufficient frequency in the language analysis), and the process can continue until the fixed-phrase level is reached. Clearly, growing all possible specializations of a generalized mapping is prohibitive in space, and the vast majority of the specializations thus generated would never be encountered in processing language. The sparseness of typical instantiations is the key to saving space. Only those instantiations of more general metaphors that are repeatedly encountered are assimilated into the hierarchy. Moreover, the number or frequency of required instances before assimilation takes place is a parameter that can be set according to the requirements of the system builder (or user). In this fashion, commonly-encountered metaphors will be recognized and understood much faster than more obscure instantiations of the general metaphors.

It is important to note that creating new instantiations of more general mappings is a much simpler process than generalizing existing concepts. Therefore, this type of specialization-based learning ought to be quite tractable with current technology.
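A minimal sketch of such a specialization hierarchy with a settable assimilation threshold is shown below. The node class, the signature strings and the counting scheme are illustrative assumptions, not the paper's design.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class MetaphorNode:
    """A node in the top-down specialization hierarchy of metaphors."""
    name: str
    parent: Optional["MetaphorNode"] = None            # ISA pointer
    children: Dict[str, "MetaphorNode"] = field(default_factory=dict)
    seen: Counter = field(default_factory=Counter)     # candidate instantiations

    def record_instantiation(self, signature: str, threshold: int = 3):
        """Count a concrete instantiation; assimilate it as a specialization
        once it has been encountered `threshold` times."""
        if signature in self.children:
            return self.children[signature]
        self.seen[signature] += 1
        if self.seen[signature] >= threshold:
            child = MetaphorNode(name=signature, parent=self)
            self.children[signature] = child
            return child
        return None

# Example: repeated "soaring prices" readings eventually become a
# specialized, quickly-recognized child of the general more-is-up node.
more_is_up = MetaphorNode("more-is-up")
for _ in range(3):
    node = more_is_up.record_instantiation("price-increase-is-ascent")
print(node.name, "ISA", node.parent.name)
```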
wrapping up:
The ideas described in this paper have not yet been implemented in a functioning computer system. I hope to start incorporating them into the POLITICS parser [2], which is modelled after Riesbeck's rule-based ELI [8]. The philosophy underlying this work is that Computational Linguistics and Artificial Intelligence can take full advantage of - not merely tolerate or circumvent - metaphors used extensively in natural language. In case the reader is still in doubt about the necessity to analyze metaphor as an integral part of any comprehensive natural language system, I point out that there are over 100 metaphors in the above text, not counting the examples. To illustrate further the ubiquity of metaphor and the difficulty we sometimes have in realizing its presence, I note that each section header and the title of this paper contain undeniable metaphors.
introduction:
The pervasiveness of metaphor in every aspect of human communication has been convincingly demonstrated by Lakoff and Johnson [4], Ortony [6], Hobbs [3] and many others. However, the creation of a process model to encompass metaphor comprehension has not been of central concern.² From a computational standpoint, metaphor has been viewed as an obstacle, to be tolerated at best and ignored at worst. For instance, Wilks [9] gives a few rules on how to relax semantic constraints in order for a parser to process a sentence in spite of the metaphorical usage of a particular word. I submit that it is insufficient merely to tolerate a metaphor. Understanding the metaphors used in language often proves to be a crucial process in establishing complete and accurate interpretations of linguistic utterances.

There appear to be a small number of general metaphors (on the order of fifty) that pervade commonly spoken English. Many of these were identified and exemplified by Lakoff and Johnson [4]. For instance: more-is-up, less-is-down and the conduit metaphor - ideas are objects, words are containers, communication consists of putting objects (ideas) into containers (words), sending the containers along a conduit (a communications medium, such as speech, telephone lines, newspapers, letters), whereupon the recipient at the other end of the conduit unpackages the objects from their containers (extracts the ideas from the words). Both of these metaphors apply in the examples discussed below.

The computational significance of the existence of a small set of general metaphors underlies the reasons for my current investigation: the problem of understanding a large class of metaphors may be reduced from a reconstruction to a recognition task. That is, the identification of a metaphorical usage as an instance of one of the general metaphorical mappings is a much more tractable process than reconstructing the conceptual framework from the bottom up each time a new metaphor-instance is encountered. Each of the general metaphors contains not only mappings of the form "X is used to mean Y in context Z", but inference rules to enrich the understanding process by taking advantage of the reasons why the writer may have chosen the particular metaphor (rather than a different metaphor or a literal rendition).

¹ This research was sponsored in part by the Defense Advanced Research Projects Agency (DOD), Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1551. The views and conclusions contained in this document are those of the author, and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.

² Hobbs has made an initial stab at this problem, although his central concern appears to be in characterizing and recognizing metaphors in commonly-encountered utterances.
Appendix:
| null | null | null | null | {
"paperhash": [
"lawler|metaphors_we_live_by",
"hobbs|metaphor,_metaphor_schemata,_and_selective_inferencing",
"carbonell|towards_a_self-extending_parser",
"wilks|knowledge_structures_and_language_boundaries",
"riesbeck|comprehension_by_computer_:_expectation-based_analysis_of_sentences_in_context"
],
"title": [
"Metaphors We Live by",
"Metaphor, Metaphor Schemata, and Selective Inferencing",
"Towards a Self-Extending Parser",
"Knowledge Structures and Language Boundaries",
"Comprehension by computer : expectation-based analysis of sentences in context"
],
"abstract": [
"Every linguist dreams of the day when the intricate variety of human language will be a commonplace, widely understood in our own and other cultures; when we can unlock the secrets of human thought and communication; when people will stop asking us how many languages we speak. This day has not yet arrived; but the present book brings it somewhat closer. It is, to begin with, a very attractive book. The publishers deserve a vote of thanks for the care that is apparent in the physical layout, typography, binding, and especially the price. Such dedication to scholarly publication at prices which scholars can afford is meritorious indeed. We may hope that the commercial success of the book will stimulate them and others to similar efforts. It is also a very enjoyable and intellectually stimulating book which raises, and occasionally answers, a number of important linguistic questions. It is written in a direct and accessible style; while it introduces and uses a number of new terms, for the most part it is free of jargon. This is no doubt part of its appeal to nonlinguists, though linguists should also find it useful and provocative. It even has possibilities as a textbook. Lakoff and Johnson state their aims and claims forthrightly at the outset (p. 3):",
"Abstract : This paper demonstrates the importance of spatial and other metaphors. An approach to handling metaphors in a computational framework is described, based on the idea of selective inferencing. Three types of metaphor are examined in detail in this light: a simple metaphor, a spatial metaphor schema, and a novel metaphor. Finally, the author discusses the analogical processes that underlie the metaphor in this approach, and what the approach says about several classical questions about the metaphor.",
"This paper discusses an approach to incremental learning in natural language processing. The technique of projecting and integrating semantic constraints to learn word definitions is analyzed as implemented in the POLITICS system. Extensions and improvements of this technique are developed. The problem of generalizing existing word meanings and understanding metaphorical uses of words is addressed in terms of semantic constraint integration.",
"The paper discusses the incorporation of richer semantic structures into the Preference Semantics system: they are called pseudo-texts and capture something of the information expressed in one type of frame proposed by Minsky (q.v.). However, they are in a format, and subject to rules of inference, consistent with earlier accounts of this system of language analysis and understanding. Their use is discussed in connection with the phen omenon of extended use: sentences where the semantic preferences are broken. It is argued that such situations are the norm and not the exception in normal language use, and that a language under standing system must give some general treatment of them.",
"Abstract : ELI (English Language Interpreter) is a natural language parsing program currently used by several story understanding systems. ELI differs from most other parsers in that it: produces meaning representations (using Schank's Conceptual Dependency system) rather than syntactic structures; uses syntactic information only when the meaning can not be obtained directly; talks to other programs that make high level inferences that tie individual events into coherent episodes; uses context-based exceptions (conceptual and syntactic) to control its parsing routines. Examples of texts that ELI has understood, and details of how it works are given."
],
"authors": [
{
"name": [
"J. Lawler",
"G. Lakoff",
"Mark Johnson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jerry R. Hobbs"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Carbonell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Y. Wilks"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Riesbeck",
"R. Schank"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"1898149",
"59529723",
"16742497",
"2746718",
"60546035"
],
"intents": [
[
"background"
],
[],
[
"methodology"
],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false
]
} | Problem: The paper addresses the problem of understanding metaphors in natural language processing and the challenge of incorporating metaphorical mappings into language analysis systems.
Solution: The paper proposes a method of reducing metaphor interpretation from a reconstruction task to a recognition task by utilizing a small number of generalized metaphor mappings, each containing a recognition network, basic mapping, transfer mappings, and an implicit intention component. | 536 | 0.087687 | null | null | null | null | null | null | null | null |
07f1bfff20cf521651fae721cd91791730edb6fd | 33790220 | null | Expanding the Horizons of Natural Language Interfaces | Current natural language interfaces have concentrated largely on determining the literal "meaning" of input from their users. While such decoding is an essential underpinning, much recent work suggests that natural language interfaces will never appear cooperative or graceful unless they also incorporate numerous non-literal aspects of communication, such as robust communication procedures. This paper defends that view, but claims that direct imitation of human performance is not the best way to implement many of these non-literal aspects of communication; that the new technology of powerful personal computers with integral graphics displays offers techniques superior to those of humans for these aspects, while still satisfying human communication needs. The paper proposes interfaces based on a judicious mixture of these techniques and the still valuable methods of more traditional natural language interfaces. | {
"name": [
"Hayes, Phil"
],
"affiliation": [
null
]
} | null | null | 18th Annual Meeting of the Association for Computational Linguistics | 1980-06-01 | 29 | 11 | null | Most work so far on natural language communication between man and machine has dealt with its literal aspects. That is. natural language interlaces have implicitly adopted the position that their user's input encodes a request for intormation of; action, and that their job is tO decode the request, retrieve the information, or perform the action, and provide appropriate output back to the user. This is essentially what Thomas [24J cnlls the Encoding-Decoding model of conversation.While literal interpretation is a basic underpinning of communication, much recent work in artificial intelligence, linguistics, and related fields has shown that it is tar from the whole story in human communication. For example, appropriate interpretation of an utterance depends on assumptions about the speaker's intentions, and conversely, the sl.)eaker's goals influence what is said (Hobbs [13J, Thomas [24] ). People often make mistakes in speaking and listening, and so have evolvod conventions for affecting regalrs-(Schegloll et el. [20J) . There must also be a way of regulating the turns of participants in a conversation (Sacks et el. [10t) . This is just a sampling of what we will collectively call non literal ~lspects ol communication.The primary reason for using natural language in man-machine communication is to allow the user to express himsell mtturallyo and without hawng to learn a special language. However, it is becoming clear that providing for n,'ttural expression means dealing will1 tile non-literal well as the literal aspects ol communication; float the ability to interpret natural language literaUy does not in itself give a man-machine interlace the ability to communicate naturally. Some work on incorporating these non-literal aspects of communication into man-machine interfaces has already begun( [6, 8, 9, 15, 21, 25] ).The position I wish to stress in this paper is that natural language interfaces will never perform acceptably unless they deal with the non-literal as well as the literal aspects of communication: that without the non-literal aspects, they will always appear uncooperative, inflexible, unfriendly, and generally stupid to their users, leading to irritation, frustration, and an unwillingness to continue to be a user.This pos=tion is coming to be held fairly widely. However, I wish to go further and suggest that, in building non-literal aspects of communication into natural-language interfaces, we should aim for the most effective type of communication rather than insisting that the interface model human performance as exactly as possible. I believe that these two aims are not necessarily the same. especially given certain new technological trends (.lis(J ti ,'~s£~l below.Most attempts to incorporate non-literal aspects of communication into natural language interlaces have attempted to model human performance as closely as possible. The typical mode of communication in such an interface, in which system and user type alternately on a single scroll of pager (or scrolled display screen), has been used as an analogy to normal spoken human conversation in Wlllcll contmunicallon takes place over a similar half-duplex channel, i.e. a channel that only one party at a time can use witllout danger of confusion.Technology is outdating this model.Tl~e nascent generation of powerful personal computers (e.g. 
the ALTO ~23} or PERQ [18J) equipped with high-resolution bit-map graphics display screens and pointing devices allow the rapid display of large quantities of information and the maintenance of several independent communication channels for both output (division ol the screen into independent windows, highlighting, and other graphics techniques), and input (direction of keyboard input to different windows, poinling ,~put). I believe that this new technology can provide highly effective, natural language-based, communication between man and machine, but only il the half-duplex style of interaction described above is dropped. Rall~er than trying to imitate human convets~mon d=rectty, it will be more fruitful to use the capabilities of this new technology, whicl~ in some respects exceed those possessed by humans, to achieve the snme ends as the non-literal aspects of normal human conversation. Work by. for instance, Carey [31 and Hiltz 1121 shows how adaptable people aro to new communication situ~.~tlons, and there is every reason Io believe that people will adapt well to an interaction in which their communication ne~,ds are satisfied, even if they are satislied in a dilterent way than in ordinary human conversation.In the remainder of the paper I will sketch some human communication needs, and go on to suggest how they can be satisfied using the technology outlined above.In this section we will discuss four human communication needs and tile non-literal aspects of communication they have given rise to:• non-grammatical utterance recognition • contextually determined interpretation • robust communication procedures • channel sharingThe account here is based in part on work reported more fully in [8, 9] .Humans must deal with non-grammatical utterances in conversation simply because DePute produce them all the time. They arise from various sources: people may leave out or swallow words; they may start to say one thing, stop in the middle, and substitute something else; they may interrupt themselves to correct something they have just said; or they may simply make errors of tense, agreement, or vocabulary. For a combination of these and other reasons, it is very rare to see three consecutive grammatical sentences in ordinary conversation.Despite the ubiquity of ungrammaticality, it has received very little attention in the literature or from the implementers of natural-language interfaces. Exceptions include PARRY {17]. COOP [14] , and interfaces produced by the LIFER [11] system. Additional work on parsing ungrammatical input has been done by Weischedel and Black [25] , and Kwasny and Sandheimer [15] . AS part of a larger project on user interfaces [ 1 ] , we (Hayes and Mouradian [7] ) have also developed a parser capable of dealing flexibly with many forms of ungrammaticality.Perhaps part of the reason that flexibility in Darsmg has received so little attent*on in work on natural language interlaces is thai the input is typed, and so the parsers used have been derived from those used to parse written prose. Speech parsers (see for example I101 or 126i) have always been much more Ilexible. Prose is normally quite grammatical simply because the writer has had time to make it grammatical. The typed input to a computer system is. produced in "real time" and is therefore much more likely to contain errors or other ungrammaticalities.The listener al any given turn in a conversation does not merely decode or extract the inherent "meaning" from what the speaker said. Instead. 
lie =nterprets the speaker's utterance in the light at the total avnilable context (see for example. Hoblo~ [13] , Thomas [24J, or Wynn [27] ). In cooperative dialogues, and computer interfaces normally operate in a cooperative situation, this contextually determined interpretation allows the participants considerable economies in what they say, substituting pronouns or other anaphonc forms for more complete descriptions, not explicitly requesting actions or information that they really desire, omitting part=cipants from descriphons of events, and leaving unsaid other information that will be "obvious" to the listener because of the Context shared by speaker and listener. In less cooperative situations, the listener's interpretations may be other than the speaker intends, and speakers may compensate for such distortions in the way they construct their utterances.While these problems have been studied extensively in more abstract natural language research (for just a few examples see [4, 5, 16] ). little attention has been paid to them in more applied language wOrk. The work of Grosz [6J and Sidner [21] on focus of attention and its relation tO anaphora and ellipsis stand out here. along with work done in the COOP [14] system on checking the presuppositions of questions with 8 negative answer, in general, contextual interpretation covers most of the work in natural language proces~ng, and subsumes numerous currently intractable problems. It is only tractable in natural language interfaceS because at the tight constraints provided by the highly restricted worlds in which they operate.Just as in any other communication across a noisy channel, there is always a basic question in human conversstion of whether the listener has received the speaker's tltterance correctly. Humans have evolved robust communication conventions for performing such checks with considerable, though not complete, reliability, and for correcting errors when they Occur (see Schegloff {20i). Such conventions include: the speaker assuming an utterance has been heard correctly unless the reply contradicts this assumbtion or there is no reply at all: the speaker trying to correct his own errors himself: the listener incorporating h=s assumptions about a doubtful utterance into his reply; the listener asking explicitly for clarification when he is sufficiently unsure. AS noted earlier, computer interfaces have sidestepped this problem by making the interaction take place over a half-duplex channel somewhat analogous to the half-duplex channel inherent m sPeech, i.e. alternate turns at typing on a scroll el paper (or scrolled display screen). However, rather than prowding flexible conventions for changing turns, such =ntertaces typically brook no interrupt=arts while they are typing, and then when they are finished ins=st that the user type a complete input with no feedback (apart from character echoing), at which point the system then takes over the channel again.in the next Section we will examine how the new generation of interface technology can help with some of the problems we have raised.If computer interfaces are ever to become cooperative and natural to use, they must incorporate nonoiiteral aspects of communication. My mum point in this section is that there =s no reason they should incorporate them in a way directly im=tative of humans: so long as they are incorporated m a way that humans are comfortable with. direct imitation is not necessary, indeed, direct imitation iS unlikely to produce satislactory mterachon. 
Given the present state of natural language processing end artificial intelligence in general, there iS no prospect in the forseeable future that interlaces will be able to emulate human performance, since this depends so much on bringing to bear larger quantities of knowledge than current AI techmques are able to handle. Partial success in such emulation zs only likely to ra=se lalse expectations in the mind of the user, and when these expectations are inevitably crushed, frustration will result. However, I believe that by making use of some of the new technology ment=oned earlier, interfaces can provide very adequate substitutes for human techniques for non-literal aspects of commumcation; substitutes that capitalzze on capabilities of computers that are not possessed by humans, bul that nevertheless will result m interaction that feels very natural to a human.Before giving some examples, let tis review the kind of hardware I am assuming. The key item is a bit-map graphics display capable of being tilled with information very quickly. The screen con be divided into independent windows to which the system can direct difterent streams of OUtput independently. Windows can be moved around on the screen, overlapped, and PODDed out from under a pile of other windoWs. The user has a pointing device with which he can posit=on a cursor to arbitrary points on the SCreen, plus, of course, a traditional keyboard. Such hardware ex=sts now and will become increasingly available as powerful personal computers such as the PERO [18J or LISP machine [2] come onto the market and start to decrease in price. The examDlas of the use of such hardware which follow are drawn in part from our current experiments m user interface research {1. 7] on similar hardware.Perhaps the aspect of communication Ihal can receive the most benefit from this type of hardware is robust communication. Suppose the user types a non.grammatical input to the system which the system's flexible parser is able to recognize if. say, it inserts a word and makes a spelling correction. Going by human convention the system would either have to ask the user to confirm exDlicdly if its correction was correct, tO cleverly incorDoram ~tS assumption into its next output, or just tO aaaume the correction without comment. Our hypothetical system has another option: it Can alter what the user just typed (possibly highlighting the words that it changed). This achieves the same effect as the second optiert above, but subst=tutes a technological trick for huma intelligencf' Again. if the user names a person, say "Smith", in a context where the system knows about several Smiths with different first names, the human oot=ons are either to incorporate a list of the names into a sentence (which becomes unwmldy when there are many more than three alternatives) or to ask Ior the first name without giving alternatives. A third alternative, possible only in this new technology, is to set up 8 window on the screen with an initial piece of text followed by a list ol alternatives (twenty can be handled quite naturally this way). The user is then free to point at the alternative he intends, a much simpler and more natural alternative than typing the name. although there is no reason why this input mode should not be available as well in case the user prefers it.As mentioned in the previous section, contextually based interpretation is important in human conversation because at the economies of expression it allows. 
There is no need for such economy in an interface's output, but the human tendency to economy in this matter is somelhing that technology cannot change. The general problem of keeping track of focus of attention in a conversation is a dillicult one (see, for example, Grosz 161 and Sidner [221), but the type ol interface we are discussing can at least provide a helpful framework in which the current locus ol attention can be made explicit. Different loci at attention can be associated with different windows on tile screen, and the system can indicate what it thinks iS Ihe current lOCUS of .nttention by, say, making the border of the corresponding window dilferent from nil the rest. Suppose in the previous example IIlat at the time the system displays the alternative Smiths. the user decides that he needs some other information before he can make a selection. He might ask Ior this information in a typed request, at which point the system would set up a new window, make it the focused window, and display the requested information in it. At this point, the user could input requests to refine the new information, and any anaphora or ellipsis he used would be handled in the appropriate context.Representing.contexts explicitly with an indication of what the system thinks is the current one can also prevent confusion. The system should try to follow a user's shifts of focus automatically, as in the above example. However, we cannot expect a system of limited understanding always to track focus shifts correctly, and so it is necessary for the system to give explicit feedback on what it thinks the shift was. Naturally, this implies that the user should be able to change focus explicitly as well as implicitly (probably by pointing to the appropriate window).Explicit representation of loci can also be used to bolster a human's limited ability to keep track of several independent contexts. In the example above, it would not have been hard lot the user to remember why he asked for the additional information and to return and make the selection alter he had received that information. With many more than two contexts, however, people quickly lose track of where they are and what they are doing. Explicit representation of all the possibly active tasks or contexts can help a user keep things straight.All the examples of how sophisticated interface hardware can help provide non-literal aspects of communication have depended on the ability of the underlying system to produce pos~bly large volumes of output rapidly at arbitrary points on the screen. In effect, this allows the system multiple output channels independent of the user's typed input, which can still be echoed even while the system is producing other output, Potentially, this frees interaction over such an interface from any turn-taking discipline. In practice, some will probably be needed to avoid confusing the user with too many things going on at once, but it can probably be looser than that found in human conversations.As a final point, I should stress that natural language capability is still extremely valuable for such an interface. While pointing input is extremely fast and natural when the object or operation that the user wishes tO identify is on the screen, it obviously cannot be used when the information is not there. 
Hierarchical menu systems, in which the selection of one item in a menu results in the display of another more detailed menu, can deal with this problem to some extent, but the descriptive power and conceptual operators ol nalural language (or an artificial language with s=milar characteristics) provide greater flexit)ility and range of expression. II the range oI options =.~ larg~;, t)ul w,dl (tiscr,nm;de(I, il =s (llh.~l easier to specify a selection by description than by pointing, no matter how ctevedy tile options are organized. | null | null | null | In this paper, 1 have taken the position that natural language interfaces to computer systems will never be truly natural until they include non-literal as web as literal aspects of communication. Further, I claimed that in the light of the new technology of powerful personal computers with integral graphics displays, the best way to incorporate these non-literal aspects was nol to imitate human conversational patterns as closely as possible, but to use the technology in innovative ways to perform the same function as the non-literal aspects of communication found in human conversation.In any case, I believe the old-style natural language interfaces in which the user and system take turns to type on a single scroll of paper (or scrolled display screen) are doomed. The new technology can be used, in ways similar to those outlined above, to provide very convenient and attractive interfaces that do not deal with natural language. The advantages of this type ol interface will so dominate those associated with the old-style natural language interfaces that continued work in that area will become ol academic interest only.That is the challenge posed by the new technology for natural language interfaces, but it also holds a promise. The promise is that a combination of natural language techniques with the new technology will result in interfaces that will be truly natural, flexible, and graceful in their interaction. The multiple channels of information flow provided by the new technology can be used to circumvent many of the areas where it is very hard to give computers the intelligence and knowledge to perform as well as humans. In short, the way forward for natural language interfaces is not to strive for closer, but still highly imperfect, imitation of human behaviour, but tO combine the strengths of the new technology with the great human ability to adapt to communication environments which are novel but adequate for their needs. | Main paper:
introduction:
Most work so far on natural language communication between man and machine has dealt with its literal aspects. That is, natural language interfaces have implicitly adopted the position that their user's input encodes a request for information or action, and that their job is to decode the request, retrieve the information, or perform the action, and provide appropriate output back to the user. This is essentially what Thomas [24] calls the Encoding-Decoding model of conversation.

While literal interpretation is a basic underpinning of communication, much recent work in artificial intelligence, linguistics, and related fields has shown that it is far from the whole story in human communication. For example, appropriate interpretation of an utterance depends on assumptions about the speaker's intentions, and conversely, the speaker's goals influence what is said (Hobbs [13], Thomas [24]). People often make mistakes in speaking and listening, and so have evolved conventions for effecting repairs (Schegloff et al. [20]). There must also be a way of regulating the turns of participants in a conversation (Sacks et al. [19]). This is just a sampling of what we will collectively call non-literal aspects of communication.

The primary reason for using natural language in man-machine communication is to allow the user to express himself naturally, and without having to learn a special language. However, it is becoming clear that providing for natural expression means dealing with the non-literal as well as the literal aspects of communication; that the ability to interpret natural language literally does not in itself give a man-machine interface the ability to communicate naturally. Some work on incorporating these non-literal aspects of communication into man-machine interfaces has already begun ([6, 8, 9, 15, 21, 25]).

The position I wish to stress in this paper is that natural language interfaces will never perform acceptably unless they deal with the non-literal as well as the literal aspects of communication: that without the non-literal aspects, they will always appear uncooperative, inflexible, unfriendly, and generally stupid to their users, leading to irritation, frustration, and an unwillingness to continue to be a user.

This position is coming to be held fairly widely. However, I wish to go further and suggest that, in building non-literal aspects of communication into natural-language interfaces, we should aim for the most effective type of communication rather than insisting that the interface model human performance as exactly as possible. I believe that these two aims are not necessarily the same, especially given certain new technological trends discussed below.

Most attempts to incorporate non-literal aspects of communication into natural language interfaces have attempted to model human performance as closely as possible. The typical mode of communication in such an interface, in which system and user type alternately on a single scroll of paper (or scrolled display screen), has been used as an analogy to normal spoken human conversation, in which communication takes place over a similar half-duplex channel, i.e. a channel that only one party at a time can use without danger of confusion.

Technology is outdating this model. The nascent generation of powerful personal computers (e.g.
the ALTO [23] or PERQ [18]) equipped with high-resolution bit-map graphics display screens and pointing devices allow the rapid display of large quantities of information and the maintenance of several independent communication channels for both output (division of the screen into independent windows, highlighting, and other graphics techniques) and input (direction of keyboard input to different windows, pointing input). I believe that this new technology can provide highly effective, natural language-based communication between man and machine, but only if the half-duplex style of interaction described above is dropped. Rather than trying to imitate human conversation directly, it will be more fruitful to use the capabilities of this new technology, which in some respects exceed those possessed by humans, to achieve the same ends as the non-literal aspects of normal human conversation. Work by, for instance, Carey [3] and Hiltz [12] shows how adaptable people are to new communication situations, and there is every reason to believe that people will adapt well to an interaction in which their communication needs are satisfied, even if they are satisfied in a different way than in ordinary human conversation.

In the remainder of the paper I will sketch some human communication needs, and go on to suggest how they can be satisfied using the technology outlined above.
non-literal aspects of communication:
In this section we will discuss four human communication needs and the non-literal aspects of communication they have given rise to:

• non-grammatical utterance recognition
• contextually determined interpretation
• robust communication procedures
• channel sharing

The account here is based in part on work reported more fully in [8, 9].

Humans must deal with non-grammatical utterances in conversation simply because people produce them all the time. They arise from various sources: people may leave out or swallow words; they may start to say one thing, stop in the middle, and substitute something else; they may interrupt themselves to correct something they have just said; or they may simply make errors of tense, agreement, or vocabulary. For a combination of these and other reasons, it is very rare to see three consecutive grammatical sentences in ordinary conversation.

Despite the ubiquity of ungrammaticality, it has received very little attention in the literature or from the implementers of natural-language interfaces. Exceptions include PARRY [17], COOP [14], and interfaces produced by the LIFER [11] system. Additional work on parsing ungrammatical input has been done by Weischedel and Black [25], and Kwasny and Sondheimer [15]. As part of a larger project on user interfaces [1], we (Hayes and Mouradian [7]) have also developed a parser capable of dealing flexibly with many forms of ungrammaticality.

Perhaps part of the reason that flexibility in parsing has received so little attention in work on natural language interfaces is that the input is typed, and so the parsers used have been derived from those used to parse written prose. Speech parsers (see for example [10] or [26]) have always been much more flexible. Prose is normally quite grammatical simply because the writer has had time to make it grammatical. The typed input to a computer system is produced in "real time" and is therefore much more likely to contain errors or other ungrammaticalities.

The listener at any given turn in a conversation does not merely decode or extract the inherent "meaning" from what the speaker said. Instead, he interprets the speaker's utterance in the light of the total available context (see, for example, Hobbs [13], Thomas [24], or Wynn [27]). In cooperative dialogues, and computer interfaces normally operate in a cooperative situation, this contextually determined interpretation allows the participants considerable economies in what they say, substituting pronouns or other anaphoric forms for more complete descriptions, not explicitly requesting actions or information that they really desire, omitting participants from descriptions of events, and leaving unsaid other information that will be "obvious" to the listener because of the context shared by speaker and listener. In less cooperative situations, the listener's interpretations may be other than the speaker intends, and speakers may compensate for such distortions in the way they construct their utterances.

While these problems have been studied extensively in more abstract natural language research (for just a few examples see [4, 5, 16]), little attention has been paid to them in more applied language work. The work of Grosz [6] and Sidner [21] on focus of attention and its relation to anaphora and ellipsis stands out here,
along with work done in the COOP [14] system on checking the presuppositions of questions with a negative answer. In general, contextual interpretation covers most of the work in natural language processing, and subsumes numerous currently intractable problems. It is only tractable in natural language interfaces because of the tight constraints provided by the highly restricted worlds in which they operate.

Just as in any other communication across a noisy channel, there is always a basic question in human conversation of whether the listener has received the speaker's utterance correctly. Humans have evolved robust communication conventions for performing such checks with considerable, though not complete, reliability, and for correcting errors when they occur (see Schegloff [20]). Such conventions include: the speaker assuming an utterance has been heard correctly unless the reply contradicts this assumption or there is no reply at all; the speaker trying to correct his own errors himself; the listener incorporating his assumptions about a doubtful utterance into his reply; the listener asking explicitly for clarification when he is sufficiently unsure.

As noted earlier, computer interfaces have sidestepped this problem by making the interaction take place over a half-duplex channel somewhat analogous to the half-duplex channel inherent in speech, i.e. alternate turns at typing on a scroll of paper (or scrolled display screen). However, rather than providing flexible conventions for changing turns, such interfaces typically brook no interruptions while they are typing, and then when they are finished insist that the user type a complete input with no feedback (apart from character echoing), at which point the system then takes over the channel again.

In the next section we will examine how the new generation of interface technology can help with some of the problems we have raised.
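The flexible parser of Hayes and Mouradian [7] mentioned above is not described in detail here, so the following is only a generic sketch of the kind of tolerance being discussed: aligning a typed request with an expected pattern while allowing one spelling repair or one omitted word, so that the corrected form can be echoed back to the user. The pattern format, the similarity threshold and the example command are assumptions for illustration only.

```python
import difflib

def tolerant_match(tokens, pattern):
    """Try to align user tokens with an expected word pattern, tolerating one
    spelling error or one omitted word. Returns (corrected_tokens, repairs)
    or None if the input is too far from the pattern for this simple sketch."""
    repairs, corrected, ti = [], [], 0
    for expected in pattern:
        if ti < len(tokens) and tokens[ti] == expected:
            corrected.append(tokens[ti]); ti += 1
            continue
        # Spelling repair: the typed token is close enough to the expected word.
        if ti < len(tokens) and difflib.SequenceMatcher(
                None, tokens[ti], expected).ratio() > 0.75:
            repairs.append((tokens[ti], expected))
            corrected.append(expected); ti += 1
            continue
        # Omission repair: assume the expected word was simply left out.
        repairs.append((None, expected))
        corrected.append(expected)
    if ti != len(tokens) or len(repairs) > 1:
        return None
    return corrected, repairs

# "delte file foo" is repaired to the expected "delete file foo" pattern; the
# repair list can drive a display that rewrites (and highlights) the user's input.
print(tolerant_match(["delte", "file", "foo"], ["delete", "file", "foo"]))
```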
incorporating non-literal aspects of communication into user interfaces:
If computer interfaces are ever to become cooperative and natural to use, they must incorporate non-literal aspects of communication. My main point in this section is that there is no reason they should incorporate them in a way directly imitative of humans: so long as they are incorporated in a way that humans are comfortable with, direct imitation is not necessary. Indeed, direct imitation is unlikely to produce satisfactory interaction. Given the present state of natural language processing and artificial intelligence in general, there is no prospect in the foreseeable future that interfaces will be able to emulate human performance, since this depends so much on bringing to bear larger quantities of knowledge than current AI techniques are able to handle. Partial success in such emulation is only likely to raise false expectations in the mind of the user, and when these expectations are inevitably crushed, frustration will result. However, I believe that by making use of some of the new technology mentioned earlier, interfaces can provide very adequate substitutes for human techniques for non-literal aspects of communication; substitutes that capitalize on capabilities of computers that are not possessed by humans, but that nevertheless will result in interaction that feels very natural to a human.

Before giving some examples, let us review the kind of hardware I am assuming. The key item is a bit-map graphics display capable of being filled with information very quickly. The screen can be divided into independent windows to which the system can direct different streams of output independently. Windows can be moved around on the screen, overlapped, and popped out from under a pile of other windows. The user has a pointing device with which he can position a cursor to arbitrary points on the screen, plus, of course, a traditional keyboard. Such hardware exists now and will become increasingly available as powerful personal computers such as the PERQ [18] or LISP machine [2] come onto the market and start to decrease in price. The examples of the use of such hardware which follow are drawn in part from our current experiments in user interface research [1, 7] on similar hardware.

Perhaps the aspect of communication that can receive the most benefit from this type of hardware is robust communication. Suppose the user types a non-grammatical input to the system which the system's flexible parser is able to recognize if, say, it inserts a word and makes a spelling correction. Going by human convention the system would either have to ask the user to confirm explicitly if its correction was correct, to cleverly incorporate its assumption into its next output, or just to assume the correction without comment. Our hypothetical system has another option: it can alter what the user just typed (possibly highlighting the words that it changed). This achieves the same effect as the second option above, but substitutes a technological trick for human intelligence.

Again, if the user names a person, say "Smith", in a context where the system knows about several Smiths with different first names, the human options are either to incorporate a list of the names into a sentence (which becomes unwieldy when there are many more than three alternatives) or to ask for the first name without giving alternatives. A third alternative, possible only in this new technology, is to set up a window on the screen with an initial piece of text followed by a list of alternatives (twenty can be handled quite naturally this way).
The user is then free to point at the alternative he intends, a much simpler and more natural alternative than typing the name, although there is no reason why this input mode should not be available as well in case the user prefers it.

As mentioned in the previous section, contextually based interpretation is important in human conversation because of the economies of expression it allows. There is no need for such economy in an interface's output, but the human tendency to economy in this matter is something that technology cannot change. The general problem of keeping track of focus of attention in a conversation is a difficult one (see, for example, Grosz [6] and Sidner [22]), but the type of interface we are discussing can at least provide a helpful framework in which the current focus of attention can be made explicit. Different foci of attention can be associated with different windows on the screen, and the system can indicate what it thinks is the current focus of attention by, say, making the border of the corresponding window different from all the rest. Suppose in the previous example that at the time the system displays the alternative Smiths, the user decides that he needs some other information before he can make a selection. He might ask for this information in a typed request, at which point the system would set up a new window, make it the focused window, and display the requested information in it. At this point, the user could input requests to refine the new information, and any anaphora or ellipsis he used would be handled in the appropriate context.

Representing contexts explicitly with an indication of what the system thinks is the current one can also prevent confusion. The system should try to follow a user's shifts of focus automatically, as in the above example. However, we cannot expect a system of limited understanding always to track focus shifts correctly, and so it is necessary for the system to give explicit feedback on what it thinks the shift was. Naturally, this implies that the user should be able to change focus explicitly as well as implicitly (probably by pointing to the appropriate window).

Explicit representation of foci can also be used to bolster a human's limited ability to keep track of several independent contexts. In the example above, it would not have been hard for the user to remember why he asked for the additional information and to return and make the selection after he had received that information. With many more than two contexts, however, people quickly lose track of where they are and what they are doing. Explicit representation of all the possibly active tasks or contexts can help a user keep things straight.

All the examples of how sophisticated interface hardware can help provide non-literal aspects of communication have depended on the ability of the underlying system to produce possibly large volumes of output rapidly at arbitrary points on the screen. In effect, this allows the system multiple output channels independent of the user's typed input, which can still be echoed even while the system is producing other output. Potentially, this frees interaction over such an interface from any turn-taking discipline. In practice, some will probably be needed to avoid confusing the user with too many things going on at once, but it can probably be looser than that found in human conversations.

As a final point, I should stress that natural language capability is still extremely valuable for such an interface.
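The explicit bookkeeping of foci described here can be sketched very simply. The following is an illustrative data structure only, not the design of any of the cited systems, with invented names, written in Python: one context per window, a stack of suspended foci, and a hook where a real interface would redraw the focused window's border.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Context:
    window_id: str                      # which screen window displays this context
    topic: str                          # e.g. "choosing among the Smiths"
    transcript: List[str] = field(default_factory=list)

class FocusManager:
    def __init__(self):
        self.current: Optional[Context] = None
        self.suspended: List[Context] = []      # stack of old foci

    def push(self, context: Context) -> None:
        """Open a new context (e.g. a clarification sub-dialogue) and focus it."""
        if self.current is not None:
            self.suspended.append(self.current)
        self.current = context
        self.highlight(context.window_id)

    def pop(self) -> None:
        """Return to the most recently suspended context."""
        if self.suspended:
            self.current = self.suspended.pop()
            self.highlight(self.current.window_id)

    def focus_explicitly(self, window_id: str) -> None:
        """The user pointed at a window: make its context current."""
        for i, ctx in enumerate(self.suspended):
            if ctx.window_id == window_id:
                if self.current is not None:
                    self.suspended.append(self.current)
                self.current = self.suspended.pop(i)
                self.highlight(window_id)
                return

    def highlight(self, window_id: str) -> None:
        # Placeholder: a real interface would give this window a distinctive border.
        print(f"[focus] window {window_id} is now the current focus of attention")

Whether focus moves implicitly (the system opening a window for requested information) or explicitly (the user pointing at a window), the same record is updated and the same visible feedback is given.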
While pointing input is extremely fast and natural when the object or operation that the user wishes to identify is on the screen, it obviously cannot be used when the information is not there. Hierarchical menu systems, in which the selection of one item in a menu results in the display of another more detailed menu, can deal with this problem to some extent, but the descriptive power and conceptual operators of natural language (or an artificial language with similar characteristics) provide greater flexibility and range of expression. If the range of options is large, but well discriminated, it is often easier to specify a selection by description than by pointing, no matter how cleverly the options are organized.
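The contrast between pointing and description can be made concrete with a small sketch, again an invented illustration rather than anything from the paper, assuming Python and a toy data set: with hundreds of well-discriminated options, a short description selects directly where a menu would require much scrolling and pointing.

from dataclasses import dataclass

@dataclass
class Report:
    author: str
    year: int
    topic: str

REPORTS = [
    Report("Smith", 1978, "parsing"),
    Report("Jones", 1979, "speech"),
    Report("Smith", 1980, "dialogue"),
    # ... imagine several hundred more entries here
]

def select_by_description(reports, **constraints):
    """Return every report matching all of the attribute constraints given."""
    return [
        r for r in reports
        if all(getattr(r, attr) == value for attr, value in constraints.items())
    ]

# "the 1980 report by Smith", instead of scrolling through a long menu:
print(select_by_description(REPORTS, author="Smith", year=1980))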
conclusion:
In this paper, I have taken the position that natural language interfaces to computer systems will never be truly natural until they include non-literal as well as literal aspects of communication. Further, I claimed that in the light of the new technology of powerful personal computers with integral graphics displays, the best way to incorporate these non-literal aspects was not to imitate human conversational patterns as closely as possible, but to use the technology in innovative ways to perform the same function as the non-literal aspects of communication found in human conversation.

In any case, I believe the old-style natural language interfaces in which the user and system take turns to type on a single scroll of paper (or scrolled display screen) are doomed. The new technology can be used, in ways similar to those outlined above, to provide very convenient and attractive interfaces that do not deal with natural language. The advantages of this type of interface will so dominate those associated with the old-style natural language interfaces that continued work in that area will become of academic interest only.

That is the challenge posed by the new technology for natural language interfaces, but it also holds a promise. The promise is that a combination of natural language techniques with the new technology will result in interfaces that will be truly natural, flexible, and graceful in their interaction. The multiple channels of information flow provided by the new technology can be used to circumvent many of the areas where it is very hard to give computers the intelligence and knowledge to perform as well as humans. In short, the way forward for natural language interfaces is not to strive for closer, but still highly imperfect, imitation of human behaviour, but to combine the strengths of the new technology with the great human ability to adapt to communication environments which are novel but adequate for their needs.
Appendix:
| null | null | null | null | {
"paperhash": [
"hayes|flexible_parsing",
"hobbs|conversation_as_planned_behavior",
"hayes|graceful_interaction_in_man-machine_communication",
"kwasny|ungrammaticality_and_extra-grammaticality_in_natural_language_understanding_systems",
"sidner|towards_a_computational_theory_of_definite_anaphora_comprehension_in_english_discourse",
"sidner|a_progress_report_on_the_discourse_and_reference_components_of_pal",
"grosz|the_representation_and_use_of_focus_in_a_system_for_understanding_dialogs",
"herdrix|human_engineering_fcr_applied_natural_language_processing",
"schegloff|the_preference_for_self-correction_in_the_organization_of_repair_in_conversation",
"charniak|toward_a_model_of_children's_story_comprehension",
"cullingford|script_application:_computer_understanding_of_newspaper_stories.",
"hendrix|human_engineering_for_applied_natural_language_processing"
],
"title": [
"Flexible Parsing",
"Conversation as Planned Behavior",
"Graceful Interaction in Man-Machine Communication",
"Ungrammaticality and Extra-Grammaticality in Natural Language Understanding Systems",
"Towards a computational theory of definite anaphora comprehension in English discourse",
"A Progress Report on the Discourse and Reference Components of PAL",
"The Representation and Use of Focus in a System for Understanding Dialogs",
"Human engineering fcr applied natural language processing",
"The preference for self-correction in the organization of repair in conversation",
"Toward a model of children's story comprehension",
"Script application: computer understanding of newspaper stories.",
"Human Engineering for Applied Natural Language Processing"
],
"abstract": [
"When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking-off and restarting, speaking in fragments, etc., Their human listeners are usually able to cope with these deviations with little difficulty. If a computer system wishes to accept natural language input from its users on a routine basis, it must display a similar indifference. In this paper, we outline a set of parsing flexibilities that such a system should provide. We go on to describe FlexP. a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system.",
"In this paper, planning models developed in artificial intelligence are applied to the kind of planning that must be carried out by participants in a conversation. A planning mechanism is defined, and a short fragment of a free-flowing videotaped conversation is described. The bulk of the paper is then devoted to an attempt to understand the conversation in terms of the planning mechanism. This microanalysis suggests ways in which the planning mechanism must be augmented, and reveals several important conversational phenomena that deserve further investigation.",
"Compared to humans, current natural language dialogue systems often behave in a rigid and fragile manner when their conversations deviate from a narrowly conceived mainstream, e.g. when faced with ungrammatical, unclear, or unrecognizable input, ambiguous descriptions, or requests for clarification of their own output. We believe that the time is now ripe to construct systems which can interact gracefully with their users when such contingencies arise. Graceful interaction is not a single skill, but a combination of several diverse abilities. We list these components, and describe one of them - the ability to communicate robustly. Detailed descriptions of all the components appear in [4], along with details of a system architecture for their integrated Implementation.",
"Among the components included in Natural Language Understanding (NLU) systems is a grammar which spec i f i es much o f the l i n g u i s t i c s t ruc tu re o f the ut terances tha t can be expected. However, i t is ce r ta in tha t inputs that are ill-formed with respect to the grammar will be received, both because people regularly form ungra=cmatical utterances and because there are a variety of forms that cannot be readily included in current grammatical models and are hence \"extra-grammatical\". These might be rejected, but as Wilks stresses, \"...understanding requires, at the very least, ... some attempt to interpret, rather than merely reject, what seem to be ill-formed utterances.\" [WIL76]",
"Abstract : This report investigates the process of focussing as a description and explanation of the comprehension of certain anaphoric expressions in English discourse. The investigation centers on the interpretation of definite anaphora, that is, on the personal pronouns, and noun phrases used with a definite article the, this, or that. Focussing is formalized as a process in which a speaker centers attention on a particular aspect of the discourse. An algorithmic description specifies what the speaker can focus on and how the speaker may change the focus of the discourse as the discourse unfolds. The algorithm allows for a simple focussing mechanism to be constructed: an element in focus, an ordered collection of alternate foci, and a stack of old foci. The data structure for the element in focus is a representation which encodes a limited set of associations between it and other elements from the discourse as well as from general knowledge. This report also establishes other constraints which are needed for the successful comprehension of anaphoric expressions. The focussing mechanism is designed to take advantage of syntactic and semantic information encoded as constraints on the choice of anaphora interpretation. These constraints are due to the work of language researchers; and the focussing mechanism provides a principled means for choosing when to apply the constraints in the comprehension process.",
"Abstract : This paper reports on research being conducted on a computer assistant, called PAL. PAL is being designed to arrange various kinds of events with concern for the who, what, when, where and why of that event. The goal for PAL is to permit a speaker to interact with it in English and to use extended discourse to state the speaker's requirements. The portion of the language system discussed in this report disambiguates references from discourse and interprets the purpose of sentences of the discourse. PAL uses the focus of discourse to direct its attention to a portion of the discourse and to the database to which the discourse refers. The focus makes it possible to disambiguate references with minimal search. Focus and a frames representation of the discourse make it possible to interpret discourse purposes. The focus and representation of the discourse are explained, and the computational components of PAL which implement reference disambiguation and discourse interpretation are presented in detail. (Author)",
"As a dialog progresses the objects and actions that are most relevant to the conversation, and hence in the focus of attention of the dialog participants, change. This paper describes a representation of focus for language understanding systems, emphasizing its use in understanding task-oriented dialogs. The representation highlights that part of the knowledge base relevant at a given point in a dialog. A model of the task is used both to structure the focus representation and to provide an index into potentially relevant concepts in the knowledge base The use of the focus representation to make retrieval of items from the knowledge base more efficient is described.",
"Human engineering features for enhancing the usability of practical natural language systems are described. Such features include spelling correction, processing of incomplete (elliptical) input?, of the underlying language definition through English queries, and their ability for casual users to extend the language accepted by the system through the use of synonyms and peraphrases. All of the features described are incorporated in LJFER, -\"applications-oriented system for creating natural language interfaces between computer programs and casual USERS LJFER's methods for the mroe complex human engineering features presented.",
"An \"organization of repair' operates in conversation, addressed to recurrent problems in speaking, hearing, and understanding. Several features of that organization are introduced to explicate the mechanism which produces a strong empirical skewing in which self-repair predominates over other-repair, and to show the operation of a preference for self-repair in the organization of repair. Several consequences of the preference for self-repair for conversational interaction are sketched.* 1. SELF- AND OTHER-CORRECTION. Among linguists and others who have at all concerned themselves with the phenomenon of'correction' (or, as we shall refer to it, 'repair'; cf. below, ?2.1), a distinction is commonly drawn between 'selfcorrection' and 'other-correction', i.e. correction by the speaker of that which is being corrected vs. correction by some 'other'.l Sociologists take an interest in such a distinction; its terms-'self' and 'other'-have long been understood as central to the study of social organization and social interaction.2 For our concerns in this paper, 'self' and 'other' are two classes of participants in interactive social",
"Massachusetts Institute of Technology. Dept. of Electrical Engineering. Thesis. 1972. Ph.D.",
"Abstract : The report describes a computer story understander which applies knowledge of the world to comprehend what it reads. The system, called SAM, reads newspaper articles from a variety of domains, then demonstrates its understanding by summarizing or paraphrasing the text, or answering questions about it. (Author)",
"Human engineering features for enhancing the usabil ity of practical natural language systems a l re described. Such features include spelling correction, processing of incomplete (ell ipt ic-~I) input?, jntfrrog-t ior of th p underlying language definition through English oueries, and ?r rbil.it y for casual users to extrnd the language accepted by the system through the-use of synonyms ana peraphrases. All of 1 h* features described are incorporated in LJFER,-\"n r ppl ieat ions-orj e nlf d system for 1 creating natural language j nterfaees between computer programs and casual USERS LJFER's methods for r<\"v] izir? the mroe complex human enginering features ? re presented. 1 INTRODUCTION This pape r depcribes aspect r of a n applieations-oriented system for creating natural langruage interfaces between computer software and Casual users. Like the underlying researen itself, the paper is focused on the human engineering involved in designing practical rnd comfortable interfaces. This focus has lead to the investigation of some generally neglected facets of language processing, including the processing of Ireomplfte inputs, the ability to resume parsing after recovering from spelling errors and the ability for naive users to input English stat.emert s at run time that, extend and person-lize the language accepted by the system. The implementation of these features in a convenient package and their integration with other human engineering features are discussed. There has been mounting evidence that the current state of the art in natural language processing, although still relatively primitive, is sufficient for dealing with some very real problems. For example, Brown and Burton (1975) have developed a usable system for computer assisted instruction, and a number of language systems have been developed for interfacing to data bases, including the REL system developed by Thompson and Thompson (1975), the LUNAR system of Woods et al. (1972), and the PLANES system ol Walt7 (1975). The SIGART newsletter for February, 1977, contains a collection cf 5? short overviews of research efforts in the general area of natural language interfaces. Tnere has rise been a growing demand for application systems. At SRi's Artificial Irtellugene Center alone, many programs are ripe for the addition of language capabilities, Including systems for data base accessing, industrial automation, automatic programming, deduct ior, and judgmental reasoning. The appeal cf these systems to builders ana users .-'like is greatly enhanced when they are able to accept natural language inputs. B. The LIFER SYSTEM To add …"
],
"authors": [
{
"name": [
"P. Hayes",
"G. Mouradian"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jerry R. Hobbs"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Hayes",
"R. Reddy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Kwasny",
"N. Sondheimer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Sidner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Sidner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Grosz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Gary G. Herdrix"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Schegloff",
"G. Jefferson",
"Harvey Sacks"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Eugene Charniak"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. E. Cullingford"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Hendrix"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"11007680",
"145566374",
"3237422",
"12695499",
"41092026",
"60695077",
"2484798",
"59814145",
"143617589",
"62620723",
"60708295",
"5436772"
],
"intents": [
[],
[],
[
"background"
],
[],
[],
[],
[
"background"
],
[],
[],
[],
[
"background"
],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | - Problem: Current natural language interfaces have focused primarily on literal interpretation of user input, neglecting non-literal aspects of communication such as robust communication procedures.
- Solution: The paper proposes that natural language interfaces should incorporate non-literal aspects of communication using the new technology of powerful personal computers with integral graphics displays, rather than directly imitating human performance, to achieve effective and natural communication between man and machine. | 536 | 0.020522 | null | null | null | null | null | null | null | null |
5a3a3483a2281c67eb33cbf98313046f030ac27c | 62715625 | null | Machine aids: a small user{'}s reaction | A technical survey of the kinds of equipment that may improve both the quality and quantity of a translator's work is followed by consideration of the new conceptions and attitudes imposed by such equipment. The importance of motivation and other psychological factors is discussed, and the need for greater attention by manufacturers to human factors is stressed. Finally, a plea is made for improved telecommunication facilities and media compatibility between systems of different makes. | {
"name": [
"Clark, Robert"
],
"affiliation": [
null
]
} | null | null | Translating and the Computer: Machine aids for translators | 1980-11-01 | 0 | 1 | null | MY REACTION TO word processors? All in good time. First I thought I would brighten your day with a cutting from The Guardian. (Overseas visitors may not have heard of the Guardian's reputation for classic misprints.) In the Guardian of 15 October 1980, page 15, an article by Hamish Macrae refers to the 'world processor'. This misprint will be seen by some as prophetic:...the world processor! A fascinating thought. But if we define a person's world as their life and work, and a processor as something which brings about changes, that misprint is not so far from the truth.Telling someone you are a translator is a guaranteed conversation-stopper-except in the unlikely event that the other person is a translator as well; that guarantees plenty of conversation, especially if, like me, you believe that translating can be done quicker and better with the help of some of the electronic aids around today with two-or even ten-fingers and a typewriter, and the person you are talking to disagrees.I am described as a 'small user'. I could hardly be smaller without disappearing altogether, since I operate as a freelance translator and have no employees. (My typists work on a casual basis.) But I could not produce the quantity (or, for that matter, the quality) of translation work that I do, without reliable, specially-adapted dictating machines and powerful, well-designed text processing equipment.A quick personal outline for those who do not know me: I translate into English from Swedish and German (mainly) and Dutch and French (occasionally). The work normally has the central theme of 'technology'. It is mainly in the form of technical instruction manuals, brochures and catalogues for industrial plant, electronic equipment, and so on. Most of the work comes from Sweden and Germany, some from other countries; some comes from direct clients, some from agencies. They all have high standards, and expect the same of the freelances who work for them. To keep my clients I have to produce work that meets their standards not only linguistically but also in its presentation. And I have to meet their delivery deadlines as well. Of course this can be done with a typewriter, but I maintain that it can be done far better with a word processor. Even so, there are disadvantages, and because they are not so evident, I shall devote at least as much time to those as to the advantages.I now have an annual output of well over half a million words, and today I plan to talk about the machine aids that have made this possible for me, and the techniques I use. Of course there are people who achieve such an output by conventional means, but I would certainly find it hard to do so, especially while maintaining a high standard. Translators are an individualistic breed, and I have found that personality counts for a great deal in translating. What is right for me may not be right for another translator.The basis of my relatively high output is dictating. I am convinced that dictating is the most efficient way of translating. It is a technique that many translators find hard to acquire, but those who master it seldom abandon it. 
I fail to understand how anyone can be a successful full-time freelance without learning this technique.The dictating equipment I use does away with the hand microphone which I used to find such a nuisance: I was for ever putting it down to use reference books and make notes, and then having to pick it up again before continuing. What I wanted was 'hands-free' dictating. When I first saw the machine I now use, what interested me about it was the desk-top microphone unit for use at meetings and so on. Although it had pushbuttons to control the machine, it also had a socket for a foot control. This was what I had been looking for for a long time. The final link was left to my ingenuity. I bought a set of headphones with a boom microphone-the sort used by aircraft pilots, air traffic controllers, telephone operators and so on-and slightly modified the dictating machine to accept it. The headphones also reduced the level of extraneous noise, making it easier to concentrate. The foot control covered all the functions of a hand control. This was my first specially-adapted machine aid, and I think its application in this form to translating is an innovation. I should be interested to know whether any other translator uses a 'hands-free' dictating method.Broadly speaking dictating is twice as fast as typing the work yourself, especially if you are not a touch-typist. But the product is a tape, and this has to be typed. Typists cost money and good ones (who can tell when to put 'affect' and when to put 'effect', for instance) are difficult to find. Another problem with dictating is that once you have committed yourself it is difficult to make changes on the tape, and impossible to make additions, as they would erase subsequent dictation. These problems can be overcome with a word processor, if it is properly used.What kinds of word-processing machine aid might a freelance translator consider?These are basically a conventional electric typewriter connected to a memory and storage unit which stores the text on a medium such as a magnetic card. Corrections can then be made and the corrected page reprinted. As each magnetic card stores one page of text, handling and storage of the cards becomes a major task if translations of any size are being handled. Paper consumption is as high as with conventional typing, since each corrected page has to be printed twice. The printing speed, though better than that of a fast typist, is relatively slow, and the machine cannot be used for other work while it is printing. I would not recommend a machine of this type for a translator.As their name implies, these machines have a screen which displays the text as it is typed, or as it is called up from the storage medium. They are of two main kinds: stand-alone machines and shared-logic machines. A stand-alone machine is completely self-contained, and is made up of keyboard, screen, processor, storage device, and printer. Shared logic systems have two or more work stations (ie screen and keyboard). A small user such as a freelance translator is likely to be interested mainly in standalone machines, though if throughput is consistently high there may be a case for having either a shared-logic system, or, in addition to the stand-alone machine, an input station (this is a word processor without a printer).Stand-alone machines may not be so well suited for large organizations. For such applications there are likely to be benefits in having a shared-logic (shared-facility) system with a large amount of on-line storage. 
This is outside the scope of this paper.The main advantages of working on a screen rather than on paper in a typewriter are:(a) Mistakes are easy to correct, and nothing ever has to be retyped. A common mistake in copy-typing is to jump from a word or phrase to the next occurrence of that word or phrase-perhaps several lines later. Of course this means retyping, but not on a word processor. The typist simply returns to the point where the passage was omitted, and types it in. The text on the screen opens up to make room for the addition. The speed and confidence of the typist are increased. (b) Another example: if the text is found to have been keyed in with the wrong margins, tab settings, pitch and so on, it is simply a matter of changing the relevant commands, and the text adjusts itself to the new layout. Similarly, an inexperienced operator who is unfamiliar with formatting (ie layout) instructions can be asked to type the text without them and they can be added later, perhaps during checking. (c) Repeated matter need only be typed once. Any piece of text, once typed and stored, can be re-used over and over again, not only at the time, but days, weeks, or even months later, and can of course be revised at will. For instance, consider a series of tables that recur on several pages, each with the same column headings and tab settings, but with different matter in the columns: on a typewriter each page would have to be typed as a separate entity, and if the work were interrupted and the typewriter were used for some other work in the meantime, the tabs might all have to be reset. With a good word processor, the complete table heading can be recalled as often as necessary, at any time, together with all the necessary tab settings. (d) A translator who deals with documents that follow a set pattern, such as data sheets, standards, specifications or annual reports, can translate the basic text (the text in which there are few or no changes from one document to the next) once only for repeated use, and need only translate the variable text.(e) Tabular and column work is easier, provided the operator is prepared to master the techniques.It is easy to describe these features, but they cannot be fully appreciated until they have been experienced first-hand. It is important when watching demonstrations of word processing equipment to be both critical and imaginative. The demonstrator is not a translator, and though you may show examples of work, and the salesman may say, 'No problem, our machine can do that standing on its head,' it is not the specimen of finished work that counts in that situation, but the process by which the result is achieved. The salesman sees only the sheet of paper you show him, and visualizes it as a copy-typing task, but the unique aspect of typing for a translator is the existence of the source text, and the fact that the typist has to combine audio-typing with scanning the source text for layout and so on. This is unlike most word processing applications.I am strongly in favour of a screen which shows as much as possible of a page at once. Salesmen whose machines show only half a page will tell you that tests have shown it is not necessary to see more at a time. I do not agree. 
It is useful to be able to display a good width as well, say landscape A4.More than any other factor, it was the versatility of the screen that sold me the machine I use.Most screen-based systems will allow you to work on to the screen while the printer is running, though there may be a few minutes' delay at the beginning of a long print job.I do not intend to deal with the financial aspects of acquiring and operating a word processor in this paper. Translators and their circumstances differ so greatly that it would be inappropriate. Nor shall I be talking of percentage increases in output or savings in typing time. I do not think in those terms. My advice would be to talk to users and make up your mind on the practical aspects first. Ask yourself whether you could adapt to working with a machine of this kind; many people have an irrational aversion to even the simplest technological devices; your typist may be one of them. You need to be enthusiastic and highly motivated to make a success of it, because it takes time to master the techniques, and it can be very frustrating at times. Only start thinking about the economics when you are quite sure you (and your typist if you use one) could get on with a word processor.(1) Do you type your own work or do you use one or more typists?A freelance translator considering word processing should look carefully at his current working methods and decide how and where a word processor would fit in. It will almost certainly lead to changes. I know two established translators who cannot understand why I use a word processor. They dictate their work, and send it out to typists who work at home.Their clients are prepared to accept typed work with handwritten corrections, and neither translator sees any reason to change the system. Many freelance translators use similar arrangements, and if you are one of these and you get a word processor you immediately face the following problems: -your typist has to come to the machine; -you can only use one typist at a time; -you cannot use the machine (for glossary work or for checking, for example) while the typist is working. You are likely to find yourself working on the machine when it is free, perhaps in the early morning, or at evenings and weekends. It is a matter of organization: if your output is 3000 words a day, your typist may take 4 hours to type this on the word processor, leaving you 4 hours of an 8-hour day to check and print the work. But to do this you need a good, committed typist, and the work must be straightforward. Complex layouts or tables disrupt this pattern. Even if you can persuade a typist to come to the machine instead of working at home, she will have to be trained, and it is far from certain she will take to it. 
In my experience, the rule seems to be: the more accustomed a typist is to a conventional typewriter, the less likely she is to adjust easily to a word processor.(2) What kind of work do you do?In theory anything that can be done on a typewriter can be done on a word processor, but I have yet to see a word processor that can cope well with formulas and equations, or even one that shows superscripts and subscripts properly on the screen.(Customers may indeed be 'prepared to accept typed work with handwritten corrections', but they might be delighted to accept work without a single correction, and to have the opportunity of getting a new printout without retyping if they wish either to alter the source text, so that the translation must be altered correspondingly, or to alter the translation, for reasons of their own. This must be a selling point, at least with customers who are aware of the possibilities of word processing. Most of my customers require a standard of presentation equal to or better than that produced within their company. It is a matter of professional pride with me to supply work to camera-copy standard at all times.Operator adjustment A word processor is not a typewriter, and ought never to be seen as one. It is a special kind of computer, a dedicated computer, in the jargon, and to use it successfully the operator must realize that he is not a typist but a programmer. When you type on a typewriter a character appears on the paper whenever you hit a key. That operation is a single event-a mechanical operation, not a program. When you type text on a word processor you are keying in a program, a series of instructions for the printer. Like any program, these instructions can be checked, modified, copied, manipulated-in a word, processed-before they finally reach the printer, and subsequently, of course.Because the word processor interprets its instructions literally, and is totally intolerant of human error or vagueness, the instructions you give it must be perfect. There is no room for error. On a typewriter you see your mistakes as soon as you make them; you can then correct the page or retype it. Not just typing errors, but layout mistakes as well. On a word processor some errors may not become apparent until you finally print out the work. One major cause of difficulty has to do with line spacing. Most screens show only single spacing in running text. Special commands are given to obtain one-and-a-half or double spacing, but the text may not show the different spacing on the screen, only on printout. It is surprising how frequently extra half line spaces are used in text. A related problem is that subscripts and superscripts may not appear as such on the screen, though again they can be obtained on the printer by entering special commands.When you put a blank sheet in a typewriter you immediately make a decision: where is the paper to be positioned along the platen? This will affect where the margins will be set. On a word processor, there may be technical reasons for not positioning the paper towards the left, as one normally does in a typewriter. It may have to be in the middle. This may have to be allowed for when keying in any formatting commands. You have to think in terms of 10, 12 or 15 pitch-the number of characters per inch, and work out the margin and tab settings in terms of numbers, whereas on a typewriter one generally sets them mechanically by eye. 
It helps to visualize a page as a grid on which each character position is defined vertically by a line number and horizontally by a character position number.To save a great deal of measurement and calculation, I have had transparent overlay grids drawn (a section of the 10-pitch one is reproduced on paper in Appendix 1) which can show the horizontal and vertical positions of any character on a page. This is particularly useful with translation work, since one often has to work to the layout of the original, leaving gaps for illustrations and so on.All these instructions must be given to the machine via the keyboard in the form of command codes, and it is this that can give rise to operator adjustment problems. For some typists it involves too much thought, or rather, a new way of thinking. It is unsafe to assume that a good conventional typist will be a good word processor operator. Quite a different mentality is required.You may wonder what can be so difficult about it. The difficulty lies not so much in the machine as in the common human aversion to the unfamiliar, the new, the different. People still struggle with the 24-hour clock, and it will be decades before metric weights and measures are fully accepted. Most of us still hanker secretly for good old pounds, shillings and pence: 'real money', we call it. The difficulty comes not in learning something completely new, but in having to unlearn a skill that has become part of you.When I translate, I adopt what I describe as the 'creative' approach. Many fellowtranslators disagree with me when I describe translating as a form of creative writing, but I maintain that when I translate, I am creating a new piece of English, on the basis of the meaning conveyed to me by the source text. I am trying-as best I can-to write what the author might have written if he had been English. Like all writers, I need to go over what I have written. This is not just a matter of 'polishing the style', though that certainly comes into it. There may be passages that were ambiguous in the original and that need to be edited in the light of new information; technical terms may need to be changed throughout the work, and so on.My aim in purchasing the text processor was to progress from the 'dictate-type-check/edit-retype' pattern to the 'dictate-type (on machine)check/edit (on screen)-print' pattern. See Appendixes 2 and 3.Appendix 4 shows in detail how a freelance translator might work with a word processor. WP in the right margin indicates use of the word processor.With a word processor it is an easy matter to compile lists of technical terms-glossaries-in two languages. To me this is one of the greatest benefits of the machine. Working for many different customers makes it difficult to keep their preferred terminology in one's head, and I make a practice of keeping glossaries for each client or, if the client is a translation agency, for each ultimate client.I am often asked whether the machine will arrange word-lists like this in alphabetical order. It doesn't, and I do not need it to. The glossary is built up entry by entry as the work progresses, and entries are simply added at the correct alphabetical position. The advantage of doing this on a word processor is that when a new entry is added, the existing text opens up to admit it.Usually the client receives a copy of the glossary with the work. This helps to keep terminology uniform throughout their publications. 
If I receive any more work from the same client, the glossary would be added to, and the updated version sent.When I was deciding which machine to buy, almost all my clients with word processing systems thought I should buy the make of word processor they had bought. I would have had to buy three or four machines to satisfy them all! I decided to buy the one I liked best. But my clients and I find the incompatibility between different systems very annoying. Surely word processor sales would increase if all systems used a common standardized medium so that work done on one manufacturer's system could be processed on another's. It is almost always impossible to transfer the medium (usually a floppy disc) from a system of one make to a system of another make. The state of affairs is similar to that with video tapes, with several systems vying for market dominance. There ought to be some standardization in this field. The problem can be overcome by transmitting data from one machine to the other via telephone lines and modems, but even this is apparently not without its problems, and can be costly. Anyone who is enterprising enough to offer a disc conversion service could do very well.The voice of users is not heard loudly enough in the word processing world, and it is time there was an organization to put pressure on manufacturers for greater standardization, not only of media formatting, but of terminology such as command code names. Some makers use delete, other erase, others rub out, and so on.You wanted to hear my reaction to machine aids, especially word processors. I hope it has emerged from what I have said that my reaction is, in two words, 'terrific, but...'As you can tell, I am very enthusiastic about word processing, but it seems to me that they are about where aircraft were in 1930. I doubt whether we shall have to wait fifty years for them to advance by the same degree as aircraft have! Designers of both hardware and software need to be more open to the psychological difficulties of working with such machines; they need to pay even more attention to human factors. Many word processors ask too much of the user's memory, and do not always show the text on the screen exactly as it will appear when printed. Users need more feedback from the screen to keep them informed. But users also need to adapt, and develop the flexibility of mind to cope with word processors. Future users-now at school-need to be taught a less rigid view of text as simply ink on paper. They are going to need to work with text on a screen-what I call 'liquid text.' It is only when you have worked day after day with a word processor for some time that your conventional concepts break down and you begin-little by little-to glimpse the possibilities.I recently heard it said that word processors are to typewriters what telephones are to pigeons. I wonder whether the typewriter will survive as well as the pigeon has. | null | null | null | null | Main paper:
:
MY REACTION TO word processors? All in good time. First I thought I would brighten your day with a cutting from The Guardian. (Overseas visitors may not have heard of the Guardian's reputation for classic misprints.) In the Guardian of 15 October 1980, page 15, an article by Hamish Macrae refers to the 'world processor'. This misprint will be seen by some as prophetic:...the world processor! A fascinating thought. But if we define a person's world as their life and work, and a processor as something which brings about changes, that misprint is not so far from the truth.Telling someone you are a translator is a guaranteed conversation-stopper-except in the unlikely event that the other person is a translator as well; that guarantees plenty of conversation, especially if, like me, you believe that translating can be done quicker and better with the help of some of the electronic aids around today with two-or even ten-fingers and a typewriter, and the person you are talking to disagrees.I am described as a 'small user'. I could hardly be smaller without disappearing altogether, since I operate as a freelance translator and have no employees. (My typists work on a casual basis.) But I could not produce the quantity (or, for that matter, the quality) of translation work that I do, without reliable, specially-adapted dictating machines and powerful, well-designed text processing equipment.A quick personal outline for those who do not know me: I translate into English from Swedish and German (mainly) and Dutch and French (occasionally). The work normally has the central theme of 'technology'. It is mainly in the form of technical instruction manuals, brochures and catalogues for industrial plant, electronic equipment, and so on. Most of the work comes from Sweden and Germany, some from other countries; some comes from direct clients, some from agencies. They all have high standards, and expect the same of the freelances who work for them. To keep my clients I have to produce work that meets their standards not only linguistically but also in its presentation. And I have to meet their delivery deadlines as well. Of course this can be done with a typewriter, but I maintain that it can be done far better with a word processor. Even so, there are disadvantages, and because they are not so evident, I shall devote at least as much time to those as to the advantages.I now have an annual output of well over half a million words, and today I plan to talk about the machine aids that have made this possible for me, and the techniques I use. Of course there are people who achieve such an output by conventional means, but I would certainly find it hard to do so, especially while maintaining a high standard. Translators are an individualistic breed, and I have found that personality counts for a great deal in translating. What is right for me may not be right for another translator.The basis of my relatively high output is dictating. I am convinced that dictating is the most efficient way of translating. It is a technique that many translators find hard to acquire, but those who master it seldom abandon it. I fail to understand how anyone can be a successful full-time freelance without learning this technique.The dictating equipment I use does away with the hand microphone which I used to find such a nuisance: I was for ever putting it down to use reference books and make notes, and then having to pick it up again before continuing. What I wanted was 'hands-free' dictating. 
When I first saw the machine I now use, what interested me about it was the desk-top microphone unit for use at meetings and so on. Although it had pushbuttons to control the machine, it also had a socket for a foot control. This was what I had been looking for for a long time. The final link was left to my ingenuity. I bought a set of headphones with a boom microphone-the sort used by aircraft pilots, air traffic controllers, telephone operators and so on-and slightly modified the dictating machine to accept it. The headphones also reduced the level of extraneous noise, making it easier to concentrate. The foot control covered all the functions of a hand control. This was my first specially-adapted machine aid, and I think its application in this form to translating is an innovation. I should be interested to know whether any other translator uses a 'hands-free' dictating method.Broadly speaking dictating is twice as fast as typing the work yourself, especially if you are not a touch-typist. But the product is a tape, and this has to be typed. Typists cost money and good ones (who can tell when to put 'affect' and when to put 'effect', for instance) are difficult to find. Another problem with dictating is that once you have committed yourself it is difficult to make changes on the tape, and impossible to make additions, as they would erase subsequent dictation. These problems can be overcome with a word processor, if it is properly used.What kinds of word-processing machine aid might a freelance translator consider?These are basically a conventional electric typewriter connected to a memory and storage unit which stores the text on a medium such as a magnetic card. Corrections can then be made and the corrected page reprinted. As each magnetic card stores one page of text, handling and storage of the cards becomes a major task if translations of any size are being handled. Paper consumption is as high as with conventional typing, since each corrected page has to be printed twice. The printing speed, though better than that of a fast typist, is relatively slow, and the machine cannot be used for other work while it is printing. I would not recommend a machine of this type for a translator.As their name implies, these machines have a screen which displays the text as it is typed, or as it is called up from the storage medium. They are of two main kinds: stand-alone machines and shared-logic machines. A stand-alone machine is completely self-contained, and is made up of keyboard, screen, processor, storage device, and printer. Shared logic systems have two or more work stations (ie screen and keyboard). A small user such as a freelance translator is likely to be interested mainly in standalone machines, though if throughput is consistently high there may be a case for having either a shared-logic system, or, in addition to the stand-alone machine, an input station (this is a word processor without a printer).Stand-alone machines may not be so well suited for large organizations. For such applications there are likely to be benefits in having a shared-logic (shared-facility) system with a large amount of on-line storage. This is outside the scope of this paper.The main advantages of working on a screen rather than on paper in a typewriter are:(a) Mistakes are easy to correct, and nothing ever has to be retyped. A common mistake in copy-typing is to jump from a word or phrase to the next occurrence of that word or phrase-perhaps several lines later. 
Of course this means retyping, but not on a word processor. The typist simply returns to the point where the passage was omitted, and types it in. The text on the screen opens up to make room for the addition. The speed and confidence of the typist are increased. (b) Another example: if the text is found to have been keyed in with the wrong margins, tab settings, pitch and so on, it is simply a matter of changing the relevant commands, and the text adjusts itself to the new layout. Similarly, an inexperienced operator who is unfamiliar with formatting (ie layout) instructions can be asked to type the text without them and they can be added later, perhaps during checking. (c) Repeated matter need only be typed once. Any piece of text, once typed and stored, can be re-used over and over again, not only at the time, but days, weeks, or even months later, and can of course be revised at will. For instance, consider a series of tables that recur on several pages, each with the same column headings and tab settings, but with different matter in the columns: on a typewriter each page would have to be typed as a separate entity, and if the work were interrupted and the typewriter were used for some other work in the meantime, the tabs might all have to be reset. With a good word processor, the complete table heading can be recalled as often as necessary, at any time, together with all the necessary tab settings. (d) A translator who deals with documents that follow a set pattern, such as data sheets, standards, specifications or annual reports, can translate the basic text (the text in which there are few or no changes from one document to the next) once only for repeated use, and need only translate the variable text.(e) Tabular and column work is easier, provided the operator is prepared to master the techniques.It is easy to describe these features, but they cannot be fully appreciated until they have been experienced first-hand. It is important when watching demonstrations of word processing equipment to be both critical and imaginative. The demonstrator is not a translator, and though you may show examples of work, and the salesman may say, 'No problem, our machine can do that standing on its head,' it is not the specimen of finished work that counts in that situation, but the process by which the result is achieved. The salesman sees only the sheet of paper you show him, and visualizes it as a copy-typing task, but the unique aspect of typing for a translator is the existence of the source text, and the fact that the typist has to combine audio-typing with scanning the source text for layout and so on. This is unlike most word processing applications.I am strongly in favour of a screen which shows as much as possible of a page at once. Salesmen whose machines show only half a page will tell you that tests have shown it is not necessary to see more at a time. I do not agree. It is useful to be able to display a good width as well, say landscape A4.More than any other factor, it was the versatility of the screen that sold me the machine I use.Most screen-based systems will allow you to work on to the screen while the printer is running, though there may be a few minutes' delay at the beginning of a long print job.I do not intend to deal with the financial aspects of acquiring and operating a word processor in this paper. Translators and their circumstances differ so greatly that it would be inappropriate. Nor shall I be talking of percentage increases in output or savings in typing time. 
I do not think in those terms. My advice would be to talk to users and make up your mind on the practical aspects first. Ask yourself whether you could adapt to working with a machine of this kind; many people have an irrational aversion to even the simplest technological devices; your typist may be one of them. You need to be enthusiastic and highly motivated to make a success of it, because it takes time to master the techniques, and it can be very frustrating at times. Only start thinking about the economics when you are quite sure you (and your typist if you use one) could get on with a word processor.(1) Do you type your own work or do you use one or more typists?A freelance translator considering word processing should look carefully at his current working methods and decide how and where a word processor would fit in. It will almost certainly lead to changes. I know two established translators who cannot understand why I use a word processor. They dictate their work, and send it out to typists who work at home.Their clients are prepared to accept typed work with handwritten corrections, and neither translator sees any reason to change the system. Many freelance translators use similar arrangements, and if you are one of these and you get a word processor you immediately face the following problems: -your typist has to come to the machine; -you can only use one typist at a time; -you cannot use the machine (for glossary work or for checking, for example) while the typist is working. You are likely to find yourself working on the machine when it is free, perhaps in the early morning, or at evenings and weekends. It is a matter of organization: if your output is 3000 words a day, your typist may take 4 hours to type this on the word processor, leaving you 4 hours of an 8-hour day to check and print the work. But to do this you need a good, committed typist, and the work must be straightforward. Complex layouts or tables disrupt this pattern. Even if you can persuade a typist to come to the machine instead of working at home, she will have to be trained, and it is far from certain she will take to it. In my experience, the rule seems to be: the more accustomed a typist is to a conventional typewriter, the less likely she is to adjust easily to a word processor.(2) What kind of work do you do?In theory anything that can be done on a typewriter can be done on a word processor, but I have yet to see a word processor that can cope well with formulas and equations, or even one that shows superscripts and subscripts properly on the screen.(Customers may indeed be 'prepared to accept typed work with handwritten corrections', but they might be delighted to accept work without a single correction, and to have the opportunity of getting a new printout without retyping if they wish either to alter the source text, so that the translation must be altered correspondingly, or to alter the translation, for reasons of their own. This must be a selling point, at least with customers who are aware of the possibilities of word processing. Most of my customers require a standard of presentation equal to or better than that produced within their company. It is a matter of professional pride with me to supply work to camera-copy standard at all times.Operator adjustment A word processor is not a typewriter, and ought never to be seen as one. It is a special kind of computer, a dedicated computer, in the jargon, and to use it successfully the operator must realize that he is not a typist but a programmer. 
When you type on a typewriter a character appears on the paper whenever you hit a key. That operation is a single event-a mechanical operation, not a program. When you type text on a word processor you are keying in a program, a series of instructions for the printer. Like any program, these instructions can be checked, modified, copied, manipulated-in a word, processed-before they finally reach the printer, and subsequently, of course.Because the word processor interprets its instructions literally, and is totally intolerant of human error or vagueness, the instructions you give it must be perfect. There is no room for error. On a typewriter you see your mistakes as soon as you make them; you can then correct the page or retype it. Not just typing errors, but layout mistakes as well. On a word processor some errors may not become apparent until you finally print out the work. One major cause of difficulty has to do with line spacing. Most screens show only single spacing in running text. Special commands are given to obtain one-and-a-half or double spacing, but the text may not show the different spacing on the screen, only on printout. It is surprising how frequently extra half line spaces are used in text. A related problem is that subscripts and superscripts may not appear as such on the screen, though again they can be obtained on the printer by entering special commands.When you put a blank sheet in a typewriter you immediately make a decision: where is the paper to be positioned along the platen? This will affect where the margins will be set. On a word processor, there may be technical reasons for not positioning the paper towards the left, as one normally does in a typewriter. It may have to be in the middle. This may have to be allowed for when keying in any formatting commands. You have to think in terms of 10, 12 or 15 pitch-the number of characters per inch, and work out the margin and tab settings in terms of numbers, whereas on a typewriter one generally sets them mechanically by eye. It helps to visualize a page as a grid on which each character position is defined vertically by a line number and horizontally by a character position number.To save a great deal of measurement and calculation, I have had transparent overlay grids drawn (a section of the 10-pitch one is reproduced on paper in Appendix 1) which can show the horizontal and vertical positions of any character on a page. This is particularly useful with translation work, since one often has to work to the layout of the original, leaving gaps for illustrations and so on.All these instructions must be given to the machine via the keyboard in the form of command codes, and it is this that can give rise to operator adjustment problems. For some typists it involves too much thought, or rather, a new way of thinking. It is unsafe to assume that a good conventional typist will be a good word processor operator. Quite a different mentality is required.You may wonder what can be so difficult about it. The difficulty lies not so much in the machine as in the common human aversion to the unfamiliar, the new, the different. People still struggle with the 24-hour clock, and it will be decades before metric weights and measures are fully accepted. Most of us still hanker secretly for good old pounds, shillings and pence: 'real money', we call it. 
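As an illustration (not part of the original paper), the pitch arithmetic described above can be made concrete with a short sketch. The pitch, paper width and margin figures in it are assumptions chosen purely for the example, and the function names are invented; it simply shows how margin and tab settings end up as character-column numbers once a pitch (characters per inch) has been chosen.

```python
# Illustrative sketch only (not from the paper): margin and tab settings
# worked out as character positions once a pitch (characters per inch)
# has been chosen. All figures below are assumed values.

def column_for_offset(offset_inches: float, pitch: int) -> int:
    """Return the 1-based character column starting at this offset from the paper edge."""
    return int(offset_inches * pitch) + 1

def usable_columns(paper_width_inches: float, pitch: int,
                   left_margin_inches: float, right_margin_inches: float):
    """Compute the first and last usable columns on a line for the chosen pitch."""
    total_columns = int(paper_width_inches * pitch)
    first = column_for_offset(left_margin_inches, pitch)
    last = total_columns - int(right_margin_inches * pitch)
    return first, last

if __name__ == "__main__":
    # Landscape A4 is roughly 11.7 inches wide; 12 pitch and 1-inch margins
    # are assumptions for the sake of the example.
    first, last = usable_columns(11.7, 12, left_margin_inches=1.0,
                                 right_margin_inches=1.0)
    print(f"12 pitch: set margins at columns {first} and {last}")
    print("a tab stop 3.5 in from the edge falls at column",
          column_for_offset(3.5, pitch=12))
```

The operator-adjustment problem the author goes on to describe is, of course, one of habit rather than of arithmetic.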
The difficulty comes not in learning something completely new, but in having to unlearn a skill that has become part of you.When I translate, I adopt what I describe as the 'creative' approach. Many fellowtranslators disagree with me when I describe translating as a form of creative writing, but I maintain that when I translate, I am creating a new piece of English, on the basis of the meaning conveyed to me by the source text. I am trying-as best I can-to write what the author might have written if he had been English. Like all writers, I need to go over what I have written. This is not just a matter of 'polishing the style', though that certainly comes into it. There may be passages that were ambiguous in the original and that need to be edited in the light of new information; technical terms may need to be changed throughout the work, and so on.My aim in purchasing the text processor was to progress from the 'dictate-type-check/edit-retype' pattern to the 'dictate-type (on machine)check/edit (on screen)-print' pattern. See Appendixes 2 and 3.Appendix 4 shows in detail how a freelance translator might work with a word processor. WP in the right margin indicates use of the word processor.With a word processor it is an easy matter to compile lists of technical terms-glossaries-in two languages. To me this is one of the greatest benefits of the machine. Working for many different customers makes it difficult to keep their preferred terminology in one's head, and I make a practice of keeping glossaries for each client or, if the client is a translation agency, for each ultimate client.I am often asked whether the machine will arrange word-lists like this in alphabetical order. It doesn't, and I do not need it to. The glossary is built up entry by entry as the work progresses, and entries are simply added at the correct alphabetical position. The advantage of doing this on a word processor is that when a new entry is added, the existing text opens up to admit it.Usually the client receives a copy of the glossary with the work. This helps to keep terminology uniform throughout their publications. If I receive any more work from the same client, the glossary would be added to, and the updated version sent.When I was deciding which machine to buy, almost all my clients with word processing systems thought I should buy the make of word processor they had bought. I would have had to buy three or four machines to satisfy them all! I decided to buy the one I liked best. But my clients and I find the incompatibility between different systems very annoying. Surely word processor sales would increase if all systems used a common standardized medium so that work done on one manufacturer's system could be processed on another's. It is almost always impossible to transfer the medium (usually a floppy disc) from a system of one make to a system of another make. The state of affairs is similar to that with video tapes, with several systems vying for market dominance. There ought to be some standardization in this field. The problem can be overcome by transmitting data from one machine to the other via telephone lines and modems, but even this is apparently not without its problems, and can be costly. 
Anyone who is enterprising enough to offer a disc conversion service could do very well.

The voice of users is not heard loudly enough in the word processing world, and it is time there was an organization to put pressure on manufacturers for greater standardization, not only of media formatting, but of terminology such as command code names. Some makers use delete, others erase, others rub out, and so on.

You wanted to hear my reaction to machine aids, especially word processors. I hope it has emerged from what I have said that my reaction is, in two words, 'terrific, but...'

As you can tell, I am very enthusiastic about word processing, but it seems to me that word processors are about where aircraft were in 1930. I doubt whether we shall have to wait fifty years for them to advance by the same degree as aircraft have! Designers of both hardware and software need to be more open to the psychological difficulties of working with such machines; they need to pay even more attention to human factors. Many word processors ask too much of the user's memory, and do not always show the text on the screen exactly as it will appear when printed. Users need more feedback from the screen to keep them informed. But users also need to adapt, and develop the flexibility of mind to cope with word processors. Future users, now at school, need to be taught a less rigid view of text as simply ink on paper. They are going to need to work with text on a screen, what I call 'liquid text'. It is only when you have worked day after day with a word processor for some time that your conventional concepts break down and you begin, little by little, to glimpse the possibilities.

I recently heard it said that word processors are to typewriters what telephones are to pigeons. I wonder whether the typewriter will survive as well as the pigeon has.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 531 | 0.001883 | null | null | null | null | null | null | null | null |
8dd47e0315832e5c94395ad82df3ec5641b72ccf | 33457554 | null | Terminological Data Banks: a model for a {B}ritish Linguistic Data Bank ({LDB}) | A description of a model linguistic data bank (LDB) for a British market will be given, based on the results of a continuing feasibility study. A LDB represents an economical and highly efficient way of organizing Britain's efforts in the field of terminology, both with respect to English and the many foreign languages through which contact is maintained with non-English speaking countries. The institutional and organizational structure will be outlined. Emphasis will be placed on services to be provided to various groups, and in particular to translators, and on the important role these groups will play in assuring the continued viability and relevance of the LDB, not only as users, but as contributors and advisers. Data acquisition policy and financial aspects will be considered. A multilingual, multidisciplinary British LDB will provide translators with a valuable service, whose applications are many, whose products are varied to cater for a wide range of needs, whose terminology is continually revised and updated and whose modes of consultation are several. | {
"name": [
"McNaught, John"
],
"affiliation": [
null
]
} | null | null | Translating and the Computer: Machine aids for translators | 1980-11-01 | 4 | 3 | null | THIS PAPER IS based on results obtained from a continuing feasibility study of the establishment of a terminological data bank in the United Kingdom, a study being carried out at UMIST under the auspices of the British Library.I shall use the term Linguistic Data Bank (or LDB) in preference to Terminological Data Bank, as many of the banks we investigated in the course of this study do not restrict themselves to handling terminological data alone. Thus LDB represents a more accurate designation of the types of information systems we will be discussing.I shall concentrate primarily on work being done in this country towards the establishment of a British LDB, but shall make reference to other LDBs abroad by way of exemplification and illustration. Indeed, I would urge you to keep in mind during this talk that, when I describe possible features of a British LDB, these features already exist in other LDBs. I am not describing services or facilities or search methods that could exist. In our proposals for a model of a British LDB, we have translated the assumedly best features of LDBs abroad to the context of a British market. Where Britain may hope to achieve a measure of innovation in LDB operation is in the use of the most up-to-date technology and software, exploiting information networks and the move towards office and home computers, etc, and in reaping the benefits of recent terminological research. There are significant advantages to be gained by being a late-comer in this field, not the least of which is to be able to study the reaction of users to existing LDBs, and so to be able to design a LDB which will suit users' needs.There are three sections to this paper: Part I deals with the reasons behind the feasibility study; Part II is a description of the phases of the study; and Part III is a presentation of a model for a British Linguistic Data Bank.The reasons and considerations behind the feasibility study are several-I shall mention only the most important:Special language communication. This involves the constant creation of terms to designate concepts, objects, measurements, products, etc. These designations (terms) differ from the words of general language, in that they refer more specifically than words, in that they are mainly used by specialists, in that they are often created according to established patterns and precedents, in that they are susceptible to standardization and in that they may be relatively short-lived and changed in the light of discoveries and developments.Efficient communication. This depends on common agreement, and can only be achieved by widespread knowledge of terms (in our case) or by easy access to terminological information. The problems of efficient communication apply with even greater force across language boundaries.Efficient special language communication. There are many different groups involved in the use and creation of terminology; all groups must have access to terminologies, both their own, and those of other disciplines.'Information explosion'. 
The immense upsurge in technological innovation and the concomitant upsurge in new terminology, together with the great increase in multilingual communication needs, means that the work of collecting, storing, sorting and disseminating terminology cannot be carried out efficiently by dispersed methods, especially when contact must be maintained with LDBs abroad housing foreign language data.

Lack of single authoritative organization in the UK. There is no single organization in the UK able to provide authoritative guidance on English usage of specialized terminology. Note that I do not say standardized terminology: the BSI do a laudable job in this area. Specialized terminology, however, is another matter, in that both standardized and non-standardized terms are present. One is dealing with the special languages of different disciplines, with the grey areas where the terminologies of disciplines meet, with in-house usage vis-a-vis wider usage, etc. There is no national centre for terminology, no centre which has close links with other bodies concerned with the production and regularization of usage of specialized terminology. There is also a distinct lack of links with foreign LDBs: no central body capable of negotiating the exchange of data with a foreign LDB, for example.

Existence of other LDBs. In recent years, major industrial countries and international organizations have established LDBs. LDBs in multilingual form exist in (nos. of main LDBs in brackets) Canada (2), at the Commission of the European Communities, in France (1), the Federal Republic of Germany (4), the German Democratic Republic (1), Sweden (1) and the USSR (2). In Denmark, plans are well advanced for the establishment of DANTERM. The UN plans to establish its own LDB, as does UNI, the Italian Standards Institution. In Spain, HISPANOTERM is of recent creation. Further information on these LDBs may be gained from Sager & McNaught 1. Great Britain is the only major industrial nation without such a service facility, that is, a centre for the processing of all kinds of terminological data.

There is a substantial amount of work being done in Britain, however, related to thesauri for indexing and retrieval purposes. One of the most important contributions Britain has made in this field is towards the development of the ISONET thesaurus, which is a computerized, controlled vocabulary of some 11.5 thousand descriptors and 5.5 thousand non-descriptors used for the selection of descriptors for indexing and searching standards and technical regulations on ISONET databases. The thesaurus consists of a classified subject display and an alphabetical list (the index to the display) and, though developed at the moment only as a bilingual English-French version, is designed to be both multilingual as well as multidisciplinary. The BSI team responsible for the development of the English part of the thesaurus has helped to produce not only an excellent indexing and information retrieval tool, but also a database whose contents contain a valuable store of terminological information.

English terminology. All the foreign LDBs mentioned contain, or will contain, substantial amounts of English terminology, at least as translation equivalents, and such vocabulary may be misleading.
The impact of LDBs on the usage of English terminology outside the UK will increase, and may, without British involvement, introduce usage unacceptable or even incomprehensible to this country. There is a serious danger that the international role of English as a means of communication may be impaired if a single, national British centre for terminology does not exist. Moreover, as many languages create new terms on the basis of English, uncontrolled elaboration of English terminology in a number of different centres has far-reaching consequences for effective communication in other languages and between these languages and English.

Nairobi Recommendation of UNESCO. Paragraph 12 of this document, on the legal protection of translators and translations, reads:
'12. Member states should consider organizing terminology centres which might be encouraged to undertake the following activities:
(a) communicating to translators current information concerning terminology required by them in the general course of their work;
(b) collaborating closely with terminology centres and developing the internationalization of scientific and technical terminology so as to facilitate the task of translators.'

Aslib 1978 conference on 'Translating and the Computer'. The audience of this conference expressed a strong interest in LDBs, and many of the organizations we have contacted during the course of this study were represented at this conference.

On the basis of the above reasons and considerations, the project seeks to establish the following:
In phase one:
-the use made of LDBs in other countries
-the cost and financing of other LDBs
-the institutional and organizational framework of other LDBs
-the availability and quality of data for a British LDB
In phase two:
-the possible uses of a LDB in the UK
-the possible structure of a British LDB

The study itself was split into three phases: , which now has its own LDB and document retrieval system (DITR) and TERMDOK (Tekniska nomenklaturcentralen), Stockholm, which collaborates very closely with SIS, the Swedish Standards Institution. Two main methodological approaches to LDB data organization exist, exemplified by EURODICAUTOM on the one hand, which stores keywords and their contexts, in the belief that translators are best served by supplying them with terms in context, and LEXIS on the other, which records terms in isolation, preferring to work from concepts.

The facilities, services, institutional and organizational structure of these major European banks were investigated, as was the functioning of other major LDBs in Europe and elsewhere, through consultation of the literature and via correspondence.

Of great interest to us were the various systems used by LDBs to finance their operations, and to establish links with their users. Here we investigated the partnership systems set up by TEAM and TERMDOK, where partners contribute terminology in return for services, and subscriber systems such as the one operated by NORMATERM. Links with users, and methods of elaborating terminology, were studied especially in relation to TEAM, TERMDOK and DANTERM. This latter has a policy of sending terminologists into the field to develop and research terminology on the spot. TERMDOK has a smoothly-running system of committees which elaborate new terminology in conjunction with industry, etc, and has wide user links in many sectors. TEAM provides a good example of how a partnership system may operate to the benefit of all members.
This particular partnership system unites many different groups and organizations, both in West Germany and in other countries, eg Philips, and the Dutch Foreign Ministry. These groups all contribute terminology to TEAM and have access to all TEAM terminology free of charge, payment only being asked for actual processing time.In the light of the above-mentioned reasons and considerations, and given the interest manifested by many different types of user, the preliminary proposal for a British LDB is not for a LDB conceived primarily for translators, or standardization specialists, but for a LDB that will serve a wide range of users, and provide a wide range of services. This proposal is also based on the analysis of results from Phase I, where a trend was perceived among well-established banks to move towards providing a wider variety of services to a wider number of user groups: TERMDOK, for example, has recently converted to a large multi-user online system, in order to serve an ever widening range of users; EURODICAUTOM, now available on EURONET-DIANE, is now expanding to meet varied demands. TEAM system was among the first to realize the need for and benefit to be gained from serving different types of user, and the success of this system, with its many partners active in contributing terminology in many fields, and its diversified services, catering for translators, publishers, standardization specialists, information scientists and language teachers, has been a great inspiration to us.There is a concomitant trend for proposed or newly-created LDBs to emphasize multifunctional and multidisciplinary aspects, eg DANTERM, which intends serving translators, technical writers, standardization specialists, publishers, students and teachers of Danish Schools of Economics, and other institutes of higher learning.We have thus proposed that a British LDB be established with the following characteristics: -multifunctional -multilingual -multidisciplinary -widely accessibleThe advantages of such a LDB are: -increased reliability and accuracy of data -production at little cost of a great variety of up-to-date glossaries and dictionaries -direct consultation on/off-line by organizations and individuals -agreement can be reached and maintained between English usage of terminology at home and abroad -a greater inflow of literature in foreign languages will be generated, which in turn will generate more demand for translations -increased and more effective communication with foreign countries, with direct benefits for exporting, especially. It would seem, on the face of it, that these characteristics and advantages are viewed wholly from an organizational point of view. However, every effort has been made to ensure user orientation of the LDB remains paramount. Without users taking an interest in the creation, development and running of services, an LDB will be a white elephant. Users are the life-blood of a LDB, not just in the role of end-users, but in the role of contributors and advisers. Wide user involvement will ensure that terminology is acquired, elaborated and disseminated in fields and in languages of immediate relevance to users, that services provided will be relevant to user's needs, and, with online searching and input, be as 'user friendly' as possible, and that appropriate measures may be taken to take into account origin of data, conditions of use, and copyright.During the second phase of our study, then, we concentrated on the needs and expectations of users. 
We approached those people who would be likely to use a LDB (ie the staff translator as opposed to the company chairman), and supplied them with documentation on existing LDBs and preliminary specifications of a British LDB for discussion purposes. We then sent them a detailed questionnaire. Follow-up enquiries were then made, as many as time and manpower resources would permit.

Results enabled us to construct typical 'user profiles': type of work carried out, the manner in which people worked, the subject areas covered, the search and output facilities desired, etc.

Comments were also obtained on the organizational and institutional structure of a LDB, on how a LDB should be financed, on which areas of terminology should receive prime attention, on which languages should be developed, on exactly what information different users expected to obtain.

Our final report to the British Library will therefore express the wishes of potential users. LDB data acquisition policy and services should be guided by those who will use the LDB, not imposed from above.

In order to cater for numerous different user groups, uses and services of the proposed LDB should be several and varied. The main aim is to provide complete flexibility of search and output facilities. It is proposed that users should not receive exhaustive information on a term, as normally they work at any one time with subsets of term record fields. Study of user profiles reveals that certain user groups prefer to work with certain fields. Thus it is proposed that pre-specified 'packages' be offered, eg term + translation equivalent + source, or term + definition, or term + translation equivalent + context, and so on. We are grateful to Mr Arthern for advice regarding such packages 3. Also, we propose that users should be able to define their own search and output facilities from among those available, thus a request from user X would produce by default output in the format he has previously specified, information which the computer can gain through for example inspection of formats associated with user identification codes.

When working conversationally, output can be given in graduated form, a refinement of the 'package' technique. That is, one may be interested in receiving primarily term + translation equivalent. In many cases, this information may be enough for one's particular purpose. However, in case of doubt, one should be able to receive further information, simply by pressing a button, eg source + context. Which information, how much and in which order, are choices that should be left to the user.

Search operations so far defined are:
(1) single term search, ie a defined sequence of characters (this could be a Uniterm or a multiterm)
(2) arbitrary string search, eg one may wish to output all terms beginning with, or containing, a certain sequence, for example, 'ethyl' or 'inter'
(3) abbreviation
(4) list of terms, where information common to all terms in the list is required, eg one may wish to see whether a list of terms has the same source, or perhaps the same synonyms.
Numbers 1 and 3 involve searches of specific fields, whereas numbers 2 and 4 involve general field searches.
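To make the 'package' idea and the four search operations above more concrete, here is a minimal sketch. It is not part of the proposal: the field names, sample entries and package definitions are all invented for illustration, and a real LDB would of course hold its records in a proper database rather than in a Python list.

```python
# Hypothetical sketch of term records, output "packages" and the four search
# operations listed above. Field names and sample entries are invented.

TERM_RECORDS = [
    {"term": "diethyl ether", "abbreviation": None, "translation": "Diethylether",
     "source": "BSI 1979", "synonyms": ["ethoxyethane"]},
    {"term": "interferon", "abbreviation": "IFN", "translation": "Interferon",
     "source": "WHO 1980", "synonyms": []},
]

# Pre-specified "packages": named subsets of fields returned together.
PACKAGES = {
    "translation": ("term", "translation", "source"),
    "synonyms": ("term", "synonyms"),
}

def package(record, name):
    """Project a full term record onto the fields of a named package."""
    return {field: record[field] for field in PACKAGES[name]}

def single_term_search(term):          # (1) search on one specific field
    return [r for r in TERM_RECORDS if r["term"] == term]

def string_search(fragment):           # (2) arbitrary string search
    return [r for r in TERM_RECORDS if fragment in r["term"]]

def abbreviation_search(abbrev):       # (3) search on the abbreviation field
    return [r for r in TERM_RECORDS if r["abbreviation"] == abbrev]

def common_field(terms, field):        # (4) one field across a list of terms
    """Eg check whether a list of terms all carry the same source."""
    hits = [r for t in terms for r in single_term_search(t)]
    return {r["term"]: r[field] for r in hits}

if __name__ == "__main__":
    print(package(single_term_search("interferon")[0], "translation"))
    print([r["term"] for r in string_search("ethyl")])
    print(common_field(["diethyl ether", "interferon"], "source"))
```

The point of the sketch is simply that operations (1) and (3) match against a single named field, while (2) and (4) range over the records more generally, which is exactly the distinction drawn above.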
These search operations may be undertaken in either online or offline mode.

Online conversational mode should also allow:
(1) paging in the alphabetic order of the source or target language (paging is equivalent to browsing through the data base)
(2) paging in the systematic order of the source or target language
(3) paging through successive multiterms beginning with or containing the query term.

The above search operations can be made more sophisticated using 'intelligent' search techniques, eg if a Uniterm exists as part of a multiterm only, then the computer should be able to find it. If no match is found, interaction with the user may take place, eg the computer may prompt the user to supply a synonym, and then carry on the search with this new information. Manual or automatic morphological truncation of terms will also prove useful, in case, for example, a term is input in the plural form, when the stored term exists in the singular. The major aim of introducing 'intelligent' searching is to ensure that the computer carries out an exhaustive search, and that even when this fails, it is able to be as helpful as possible, by offering related and relevant information.

Given that output formats are dependent on the needs of individual users, it is proposed that fully parameterized output options be offered. That is, users should be able to choose not only which information they want, and in which form, but also such details as eg page coverage, line spacing, number of columns, character set, type of 'package', and so on. As operations on an open set of options are involved, only a few possibilities are mentioned:

Two basic types of information are usually required by users:
(1) a complete term record, or selected fields thereof, perhaps 'packages' (this is typically for online use)
(2) selected fields of more than one term record, output in the form of, for example:
monolingual alphabetic indices, eg term + generic term
bilingual indices: term + synonym + translation equivalent
text-oriented glossaries: term + synonym + source + translation equivalent
alphabetic/systematic glossaries (many other combinations possible) by subject area(s), by language(s), by project(s), by source(s), etc.
phraseological glossaries
concordances
keyword indices
full-scale dictionaries, etc.

In order to provide a wide range of services to a wide range of users, the following output media are necessary:
-VDU or visual display unit. This has the advantage of offering great versatility. For example, one may receive anything from screenfuls of information to single lines. Screen 'windows' may be employed. A translator could have one section of his VDU screen reserved as a working space, another for calling up information from the LDB. Various types of terminal exist, such as 'slave' terminals, which are directly connected, and have no processing capability of their own, or 'intelligent' terminals, which, as the name suggests, have a certain amount of individual processing power. One may of course wish to connect one's own office or home computer to the LDB, via a telephone link. It would also be possible to work totally independent of the LDB (see below), using one's own computer.
-Hard copy. A variety of printers should be available, to provide various degrees of quality output, which could be supplied on various types of paper. A user should also be able to receive information on his own printer. Updated printouts of eg glossaries could be sent on a regular basis.
-Microfiche.
Advantages here are low cost and regular updates. Such media are very useful for infrequent but detailed searches. Also, it may be the case that some users may prefer to work with hard copy or microfiches to begin with, especially in the early development years of the LDB, and only acquire conversational capability at a time when the LDB can supply a useful number of responses in relevant subject fields. -Magnetic tape. Such a medium may be useful for eg publishers, who wish to submit many thousand terms for processing. Tapes will also be used for exchange purposes with other LDBs and terminology centres. We have already mentioned several advantages of a LDB in passing. Here I would just like to draw your attention to the advantage of using a LDB over looking in a dictionary, or consulting a subject specialist, two of the most widely used methods of solving a terminological problem.With a dictionary, you may find that even a recently published edition may be outof-date. Consulting a subject specialist may be fruitless, as he himself may not be aware of the term.A dictionary is time-consuming to use (especially if you share one, and someone else is using it) and consultation may likewise be time-consuming (especially if your specialist has gone for coffee, or you enter into a conversation).With a dictionary, you must know how it is organized, in order to be able to use it efficiently. Consultation may involve lengthy explanation of the context, or description of the conceptual environment of the term.When searching in a dictionary, one is usually confined to a 'main entry' type of search. Also, the dictionary, being printed on paper, is of fixed format, and so may not be suited to your especial needs. Other disadvantages of a dictionary are that it is bulky, prone to wear and tear, and not particularly cheap. Failure to find a translation equivalent for a term, for example, whether as a result of dictionary look-up, or of consultation, may encourage creation of a neologism, which in many cases may hamper communication, rather than aid it.When one looks at a LDB, however, the following advantages are immediately apparent to the professional linguist. The LDB's terminological data are up-dated constantly, with new terms being inserted, obsolete terms excised, and new information being inserted on existing terminology. Access to the LDB can be very rapid, if working on-line, or with one's own subset of the LDB on floppy disc, or with a microfiche or printout tailored to one's needs. The LDB can be considered as a 'black box': the user does not need to know how the data are organized inside the machine. He is helped in his search by powerful search routines, whose existence he is again unaware of, as his contact with the LDB is through a query language which is constructed so as to be as 'user friendly' as possible, and may in fact represent a restricted subset of his own language. These powerful search routines mentioned ensure that a search is as exhaustive as possible. The computer can carry out parallel searches in a number of different data bases, in a number of different term records, and so on, comparing, correlating, combining information in order to produce not just a correct response, but, in the case where a primary search proves negative, a response that may go some way to providing the user with at least some information regarding his query. 
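The 'intelligent' fallback behaviour described above (exact match first, then searching for the query inside multiterms, then simple morphological truncation such as stripping a plural ending, and finally prompting the user for a synonym) might be arranged roughly as follows. This is an illustrative sketch, not the routine any existing LDB actually uses; the glossary entries and the crude truncation rule are assumptions made for the example.

```python
# Rough illustration (not any existing LDB's actual routine) of a search that
# falls back from an exact match to multiterm containment and then to a very
# crude morphological truncation. The glossary entries are invented.

GLOSSARY = {
    "data bank": "banque de données",
    "terminological data bank": "banque de données terminologiques",
    "descriptor": "descripteur",
}

def truncate(word: str) -> str:
    """Stand-in for real morphological truncation: just strip a final 's'."""
    return word[:-1] if word.endswith("s") else word

def look_up(query: str):
    # 1. Exact match against the stored term.
    if query in GLOSSARY:
        return [(query, GLOSSARY[query])]
    # 2. The query may exist only as part of a multiterm.
    contained = [(t, e) for t, e in GLOSSARY.items() if query in t]
    if contained:
        return contained
    # 3. Morphological truncation, eg a plural query for a singular entry.
    stem = " ".join(truncate(w) for w in query.split())
    if stem != query:
        return look_up(stem)
    # 4. Nothing found: an online system would now prompt for a synonym.
    return []

if __name__ == "__main__":
    print(look_up("terminological data bank"))   # exact match
    print(look_up("descriptors"))                # found after truncation
    print(look_up("bank"))                       # found inside multiterms
    print(look_up("thesaurus"))                  # [] -> ask the user for a synonym
```

In the proposed system such routines would sit behind the query language, invisible to the user; what matters to him is only that the search is exhaustive and, when it fails, still as helpful as possible.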
Exhaustivity and reliability, combined with the authority of a well-run and widely respected institution, will ensure that the LDB offers practical and useful services to all its users.At this stage, no decisions can be taken regarding costs to the user. However, analysis of practices in other LDBs suggests several methods of payment for services. It is to be hoped that a British LDB will offer a combination of these, suited to individual users' needs. Methods employed in other LDBs are:Sponsorship system. This would involve an annual grant in return for free use of the LDB, a system practised by NORMATERM.Subscriber system. This would involve for example a monthly sum giving credit at special rates, again a system used by NORMATERM.Ad hoc system. This involves payment on a time or unit basis, and is a method practised by all LDBs.Contributor system. Supplying data free of charge against payment or in return for services is a system offered by eg TEAM.Partnership system. This would involve supplying data in return for credit to use the LDB, and is a system practised by TEAM and TERMDOK, with a great deal of success.Most of the services of the LDB would be non-competitive, as they would not be available in any other way. On the other hand, users will consider paying for these services only if they lead to a reduction in their own costs, if they represent a necessary improvement in the quality of their work, ultimately reflected in greater income to justify this expenditure, or if they contribute in some other way to increased productivity, new products or services. If for example the job-satisfaction and productivity of translators can be increased by, say, 10%, the translator, or his employer, | null | null | null | null | Main paper:
:
THIS PAPER IS based on results obtained from a continuing feasibility study of the establishment of a terminological data bank in the United Kingdom, a study being carried out at UMIST under the auspices of the British Library.I shall use the term Linguistic Data Bank (or LDB) in preference to Terminological Data Bank, as many of the banks we investigated in the course of this study do not restrict themselves to handling terminological data alone. Thus LDB represents a more accurate designation of the types of information systems we will be discussing.I shall concentrate primarily on work being done in this country towards the establishment of a British LDB, but shall make reference to other LDBs abroad by way of exemplification and illustration. Indeed, I would urge you to keep in mind during this talk that, when I describe possible features of a British LDB, these features already exist in other LDBs. I am not describing services or facilities or search methods that could exist. In our proposals for a model of a British LDB, we have translated the assumedly best features of LDBs abroad to the context of a British market. Where Britain may hope to achieve a measure of innovation in LDB operation is in the use of the most up-to-date technology and software, exploiting information networks and the move towards office and home computers, etc, and in reaping the benefits of recent terminological research. There are significant advantages to be gained by being a late-comer in this field, not the least of which is to be able to study the reaction of users to existing LDBs, and so to be able to design a LDB which will suit users' needs.There are three sections to this paper: Part I deals with the reasons behind the feasibility study; Part II is a description of the phases of the study; and Part III is a presentation of a model for a British Linguistic Data Bank.The reasons and considerations behind the feasibility study are several-I shall mention only the most important:Special language communication. This involves the constant creation of terms to designate concepts, objects, measurements, products, etc. These designations (terms) differ from the words of general language, in that they refer more specifically than words, in that they are mainly used by specialists, in that they are often created according to established patterns and precedents, in that they are susceptible to standardization and in that they may be relatively short-lived and changed in the light of discoveries and developments.Efficient communication. This depends on common agreement, and can only be achieved by widespread knowledge of terms (in our case) or by easy access to terminological information. The problems of efficient communication apply with even greater force across language boundaries.Efficient special language communication. There are many different groups involved in the use and creation of terminology; all groups must have access to terminologies, both their own, and those of other disciplines.'Information explosion'. The immense upsurge in technological innovation and the concomitant upsurge in new terminology, together with the great increase in multilingual communication needs, means that the work of collecting, storing, sorting and disseminating terminology cannot be carried out efficiently by dispersed methods, especially when contact must be maintained with LDBs abroad housing foreign language data.Lack of single authoritative organization in the UK. 
There is no single organization in the UK able to provide authoritative guidance on English usage of specialized terminology. Note that I do not say standardized terminology: the BSI do a laudable job in this area. Specialized terminology, however, is another matter, in that both standardized and non-standardized terms are present. One is dealing with the special languages of different disciplines, with the grey areas where the terminologies of disciplines meet, with in-house usage vis-a-vis wider usage, etc. There is no national centre for terminology, no centre which has close links with other bodies concerned with the production and regularization of usage of specialized terminology. There is also a distinct lack of links with foreign LDBs-no central body capable of negotiating the exchange of data with a foreign LDB, for example.Existence of other LDBs. In recent years, major industrial countries and international organizations have established LDBs. LDBs in multilingual form exist in (nos. of main LDBs in brackets) Canada (2), at the Commission of the European Communities, in France (1), the Federal Republic of Germany (4), the German Democratic Republic VOL. 33, NO. 7/8(1), Sweden (1) and the USSR (2) . In Denmark, plans are well advanced for the establishment of DANTERM. The UN plans to establish its own LDB, as does UNI, the Italian Standards Institution. In Spain, HISPANOTERM is of recent creation. Further information on these LDBs may be gained from Sager & McNaught 1 . Great Britain is the only major industrial nation without such a service facility, that is, a centre for the processing of all kinds of terminological data.There is a substantial amount of work being done in Britain, however, related to thesauri for indexing and retrieval purposes. One of the most important contributions Britain has made in this field is towards the development of the ISONET thesaurus, which is a computerized, controlled vocabulary of some 11.5 thousand descriptors and 5.5 thousand non-descriptors used for the selection of descriptors for indexing and searching standards and technical regulations on ISONET databases. The thesaurus consists of a classified subject display and an alphabetical list (the index to the display) and, though developed at the moment only as a bilingual English-French version, is designed to be both multilingual as well as multidisciplinary. The BSI team responsible for the development of the English part of the thesaurus has helped to produce not only an excellent indexing and information retrieval tool, but also a database whose contents contain a valuable store of terminological information.English terminology. All the foreign LDBs mentioned contain, or will contain, substantial amounts of English terminology, at least as translation equivalents, and such vocabulary may be misleading. The impact of LDBs on the usage of English terminology outside the UK will increase, and may, without British involvement, introduce usage unacceptable or even incomprehensible to this country.There is a serious danger that the international role of English as a means of communication may be impaired if a single, national British centre for terminology does not exist. Moreover, as many languages create new terms on the basis of English, uncontrolled elaboration of English terminology in a number of different centres has far-researching consequences for effective communication in other languages and between these languages and English.Nairobi Recommendation of UNESCO. 
Paragraph 12 of this document, on the legal protection of translators and translations, reads:'12. Member states should consider organizing terminology centres which might be encouraged to undertake the following activities:(a) communicating to translators current information concerning terminology required by them in the general course of their work; (b) collaborating closely with terminology centres and developing the internationalization of scientific and technical terminology so as to facilitate the task of translators.' Aslib 1978 conference on 'Translating and the Computer'. The audience of this conference expressed a strong interest in LDBs, and many of the organizations we have contacted during the course of this study were represented at this conference.On the basis of the above reasons and considerations, the project seeks to establish the following:In phase one: -the use made of LDBs in other countries -the cost and financing of other LDBs -the institutional and organizational framework of other LDBs -the availability and quality of data for a British LDB In phase two: -the possible uses of a LDB in the UK -the possible structure of a British LDB The study itself was split into three phases: , which now has its own LDB and document retrieval system (DITR) and TERMDOK (Tekniska nomenklaturcentralen), Stockholm, which collaborates very closely with SIS, the Swedish Standards Institution. Two main methodological approaches to LDB data organization exist, exemplified by EURODICAUTOM on the one hand, which stores keywords and their contexts, in the belief that translators are best served by supplying them with terms in context, and LEXIS on the other, which records terms in isolation, preferring to work from concepts.The facilities, services, institutional and organizational structure of these major European banks were investigated, as was the functioning of other major LDBs in Europe and elsewhere, through consultation of the literature and via correspondence.Of great interest to us were the various systems used by LDBs to finance their operations, and to establish links with their users. Here we investigated the partnership systems set up by TEAM and TERMDOK, where partners contribute terminology in return for services, and subscriber systems such as the one operated by NOR-MATERM. Links with users, and methods of elaborating terminology, were studied especially in relation to TEAM, TERMDOK and DANTERM. This latter has a policy of sending terminologists into the field to develop and research terminology on the spot. TERMDOK has a smoothly-running system of committees which elaborate new terminology in conjunction with industry, etc, and has wide user links in many sectors. TEAM provides a good example of how a partnership system may operate to the benefit of all members. This particular partnership system unites many different groups and organizations, both in West Germany and in other countries, eg Philips, and the Dutch Foreign Ministry. These groups all contribute terminology to TEAM and have access to all TEAM terminology free of charge, payment only being asked for actual processing time.In the light of the above-mentioned reasons and considerations, and given the interest manifested by many different types of user, the preliminary proposal for a British LDB is not for a LDB conceived primarily for translators, or standardization specialists, but for a LDB that will serve a wide range of users, and provide a wide range of services. 
This proposal is also based on the analysis of results from Phase I, where a trend was perceived among well-established banks to move towards providing a wider variety of services to a wider number of user groups: TERMDOK, for example, has recently converted to a large multi-user online system, in order to serve an ever widening range of users; EURODICAUTOM, now available on EURONET-DIANE, is now expanding to meet varied demands. TEAM system was among the first to realize the need for and benefit to be gained from serving different types of user, and the success of this system, with its many partners active in contributing terminology in many fields, and its diversified services, catering for translators, publishers, standardization specialists, information scientists and language teachers, has been a great inspiration to us.There is a concomitant trend for proposed or newly-created LDBs to emphasize multifunctional and multidisciplinary aspects, eg DANTERM, which intends serving translators, technical writers, standardization specialists, publishers, students and teachers of Danish Schools of Economics, and other institutes of higher learning.We have thus proposed that a British LDB be established with the following characteristics: -multifunctional -multilingual -multidisciplinary -widely accessibleThe advantages of such a LDB are: -increased reliability and accuracy of data -production at little cost of a great variety of up-to-date glossaries and dictionaries -direct consultation on/off-line by organizations and individuals -agreement can be reached and maintained between English usage of terminology at home and abroad -a greater inflow of literature in foreign languages will be generated, which in turn will generate more demand for translations -increased and more effective communication with foreign countries, with direct benefits for exporting, especially. It would seem, on the face of it, that these characteristics and advantages are viewed wholly from an organizational point of view. However, every effort has been made to ensure user orientation of the LDB remains paramount. Without users taking an interest in the creation, development and running of services, an LDB will be a white elephant. Users are the life-blood of a LDB, not just in the role of end-users, but in the role of contributors and advisers. Wide user involvement will ensure that terminology is acquired, elaborated and disseminated in fields and in languages of immediate relevance to users, that services provided will be relevant to user's needs, and, with online searching and input, be as 'user friendly' as possible, and that appropriate measures may be taken to take into account origin of data, conditions of use, and copyright.During the second phase of our study, then, we concentrated on the needs and expectations of users. We approached those people who would be likely to use a LDB (ie the staff translator as opposed to the company chairman), and supplied them with documentation on existing LDBs and preliminary specifications of a British LDB for discussion purposes. We then sent them a detailed questionnaire. 
Follow-up enquiries were then made, as many as time and manpower resources would permit.Results enabled us to construct typical 'user profiles': type of work carried out, the manner in which people worked, the subject areas covered, the search and output facilities desired, etc.Comments were also obtained on the organizational and institutional structure of a LDB, on how a LDB should be financed, on which areas of terminology should receive prime attention, on which languages should be developed, on exactly what information different users expected to obtain.Our final report to the British Library will therefore express the wishes of potential users. LNB data acquisition policy and services should be guided by those who will use the LDB, not imposed from above.In order to cater for numerous different user groups, uses and services of the proposed LDB should be several and varied. The main aim is to provide complete flexibility of search and output facilities. It is proposed that users should not receive exhaustive information on a term, as normally they work at any one time with subsets of term record fields. Study of user profiles reveals that certain user groups prefer to work with certain fields. Thus it is proposed that pre-specified 'packages' be offered eg term + translation equivalent + source, or term + definition, or term + translation equivalent + context, and so on. We are grateful to Mr Arthern for advice regarding such packages 3 . Also, we propose that users should be able to define their own search and output facilities from among those available, thus a request from user X would produce by default output in the format he has previously specified, information which the computer can gain through for example inspection of formats associated with user identification codes.When working conversationally, output can be given in graduated form, a refinement of the 'package' technique. That is, one may be interested in receiving primarily term + translation equivalent. In many cases, this information may be enough for one's particular purpose. However, in case of doubt, one should be able to receive further information, simply by pressing a button, eg source + context. Which information, how much and in which order, are choices that should be left to the user. Search operations so far defined are: (1) single term search ie a defined sequence of characters (this could be a Uniterm or a multiterm) (2) arbitrary string search eg one may wish to output all terms beginning with, or containing, a certain sequence, for example, 'ethyl' or 'inter'. (3) abbreviation (4) list of terms, where information common to all terms in the list is required, eg one may wish to see whether a list of terms have the same source, or perhaps the same synonyms. Numbers 1 and 3 involve searches of specific fields, whereas numbers 2 and 4 involve general field searches. These may be undertaken in either online or offline mode.Online conversational mode should also allow: (1) paging in the alphabetic order of the source or target language (paging is equivalent to browsing through the data base) (2) paging in the systematic order of the source or target language (3) paging through successive multiterms beginning with or containing the query term.The above search operations can be made more sophisticated using 'intelligent' search techniques eg if a Uniterm exists as part of a multiterm only, then the computer should be able to find it. 
If no match is found, interaction with the user may take place eg the computer may prompt the user to supply a synonym, and then carry on the search with this new information. Manual or automatic morphological truncation JULY/AUGUST 1981 TERMINOLOGICAL DATA BANKS of terms will also prove useful, in case, for example, a term is input in the plural form, when the stored term exists in the singular. The major aim of introducing 'intelligent' searching is to ensure that the computer carries out an exhaustive search, and that even when this fails, it is able to be as helpful as possible, by offering related and relevant information.Given that output formats are dependent on the needs of individual users, it is proposed that fully parameterized output options be offered. That is, users should be able to choose not only which information they want, and in which form, but also such details as eg page coverage, line spacing, number of columns, character set, type of 'package', and so on. As operations on an open set of options are involved, only a few possibilities are mentioned:Two basic types of information are usually required by users: (1) a complete term record, or selected fields thereof, perhaps 'packages'; (this is typically for online use) (2) selected fields of more than one term record, output in the form of, for example: monolingual alphabetic indices egs term + generic term bilingual indices term + synonym + translation equivalent text-oriented glossaries term + synonym + source + translation equivalent alphabetic/systematic glossaries (many other combinations possible) by subject area(s) by language(s) by project(s) by source(s) etc. phraseological glossaries concordances keyword indices full-scale dictionaries etc.In order to provide a wide range of services to a wide range of users, the following output media are necessary: -VDU or visual display unit. This has the advantage of offering great versatility. For example, one may receive anything from screenfuls of information to single lines. Screen 'windows' may be employed. A translator could have one section of his VDU screen reserved as a working space, another for calling up information from the LDB. Various types of terminal exist, such as 'slave' terminals, which are directly connected, and have no processing capability of their own, or 'intelligent' terminals, which, as the name suggests, have a certain amount of individual processing power. One may of course wish to connect one's own office or home computer to the LDB, via a VOL. 33, NO. 7/8 telephone link. It would also be possible to work totally independent of the LDB (see below), using one's own computer.-Hard copy. A variety of printers should be available, to provide various degrees of quality output, which could be supplied on various types of paper. A user should also be able to receive information on his own printer. Updated printouts of eg glossaries could be sent on a regular basis. -Microfiche. Advantages here are low cost and regular updates. Such media are very useful for infrequent but detailed searches. Also, it may be the case that some users may prefer to work with hard copy or microfiches to begin with, especially in the early development years of the LDB, and only acquire conversational capability at a time when the LDB can supply a useful number of responses in relevant subject fields. -Magnetic tape. Such a medium may be useful for eg publishers, who wish to submit many thousand terms for processing. 
Tapes will also be used for exchange purposes with other LDBs and terminology centres. We have already mentioned several advantages of a LDB in passing. Here I would just like to draw your attention to the advantage of using a LDB over looking in a dictionary, or consulting a subject specialist, two of the most widely used methods of solving a terminological problem.With a dictionary, you may find that even a recently published edition may be outof-date. Consulting a subject specialist may be fruitless, as he himself may not be aware of the term.A dictionary is time-consuming to use (especially if you share one, and someone else is using it) and consultation may likewise be time-consuming (especially if your specialist has gone for coffee, or you enter into a conversation).With a dictionary, you must know how it is organized, in order to be able to use it efficiently. Consultation may involve lengthy explanation of the context, or description of the conceptual environment of the term.When searching in a dictionary, one is usually confined to a 'main entry' type of search. Also, the dictionary, being printed on paper, is of fixed format, and so may not be suited to your especial needs. Other disadvantages of a dictionary are that it is bulky, prone to wear and tear, and not particularly cheap. Failure to find a translation equivalent for a term, for example, whether as a result of dictionary look-up, or of consultation, may encourage creation of a neologism, which in many cases may hamper communication, rather than aid it.When one looks at a LDB, however, the following advantages are immediately apparent to the professional linguist. The LDB's terminological data are up-dated constantly, with new terms being inserted, obsolete terms excised, and new information being inserted on existing terminology. Access to the LDB can be very rapid, if working on-line, or with one's own subset of the LDB on floppy disc, or with a microfiche or printout tailored to one's needs. The LDB can be considered as a 'black box': the user does not need to know how the data are organized inside the machine. He is helped in his search by powerful search routines, whose existence he is again unaware of, as his contact with the LDB is through a query language which is constructed so as to be as 'user friendly' as possible, and may in fact represent a restricted subset of his own language. These powerful search routines mentioned ensure that a search is as exhaustive as possible. The computer can carry out parallel searches in a number of different data bases, in a number of different term records, and so on, comparing, correlating, combining information in order to produce not just a correct response, but, in the case where a primary search proves negative, a response that may go some way to providing the user with at least some information regarding his query. Exhaustivity and reliability, combined with the authority of a well-run and widely respected institution, will ensure that the LDB offers practical and useful services to all its users.At this stage, no decisions can be taken regarding costs to the user. However, analysis of practices in other LDBs suggests several methods of payment for services. It is to be hoped that a British LDB will offer a combination of these, suited to individual users' needs. Methods employed in other LDBs are:Sponsorship system. This would involve an annual grant in return for free use of the LDB, a system practised by NORMATERM.Subscriber system. 
This would involve for example a monthly sum giving credit at special rates, again a system used by NORMATERM.
Ad hoc system. This involves payment on a time or unit basis, and is a method practised by all LDBs.
Contributor system. Supplying data free of charge against payment or in return for services is a system offered by eg TEAM.
Partnership system. This would involve supplying data in return for credit to use the LDB, and is a system practised by TEAM and TERMDOK, with a great deal of success.

Most of the services of the LDB would be non-competitive, as they would not be available in any other way. On the other hand, users will consider paying for these services only if they lead to a reduction in their own costs, if they represent a necessary improvement in the quality of their work, ultimately reflected in greater income to justify this expenditure, or if they contribute in some other way to increased productivity, new products or services. If for example the job-satisfaction and productivity of translators can be increased by, say, 10%, the translator, or his employer,
Appendix:
| null | null | null | null | {
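To make the 'intelligent' fallback searching described in the body of the paper more concrete, the following is a minimal, purely illustrative sketch in Python: an exact look-up first, then crude morphological truncation (eg a plural reduced to the stored singular), then a user-supplied synonym. The sample records, the truncation rules and all names are hypothetical, standing in for what a real LDB would do with proper morphological analysis over indexed storage.

```python
# Illustrative only: a toy model of exhaustive fallback searching in a term bank.
TERM_RECORDS = {
    "thermocouple": {"de": "Thermoelement", "fr": "thermocouple"},
    "heat exchanger": {"de": "Wärmetauscher", "fr": "échangeur de chaleur"},
}

def truncated_forms(term):
    """Crude morphological truncation: strip a few common English endings."""
    for suffix in ("es", "s", "ing", "ed"):
        if term.endswith(suffix) and len(term) > len(suffix) + 2:
            yield term[: -len(suffix)]

def search(term, ask_user_for_synonym=None):
    term = term.strip().lower()
    # 1. Exact match against the stored term records.
    if term in TERM_RECORDS:
        return TERM_RECORDS[term]
    # 2. Morphological truncation, eg a plural entered where the singular is stored.
    for form in truncated_forms(term):
        if form in TERM_RECORDS:
            return TERM_RECORDS[form]
    # 3. Interaction: prompt the user for a synonym and retry once with it.
    if ask_user_for_synonym is not None:
        synonym = ask_user_for_synonym(term)
        if synonym:
            return search(synonym)
    # 4. Still nothing: the caller can now offer related or relevant information.
    return None

print(search("thermocouples"))   # found via truncation of the plural
print(search("heat exchanger"))  # found directly
print(search("condenser"))       # None: no record and no synonym supplied
```

The point of the sketch is only the ordering of the steps: the search is made as exhaustive as cheaply possible before the user is asked to help, and a failed search still leaves room for the system to respond with related information rather than a bare rejection.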
"paperhash": [
"arthern|machine_translation_and_computerised_terminology_systems_-_a_translator’s_viewpoint"
],
"title": [
"Machine translation and computerised terminology systems - a translator’s viewpoint"
],
"abstract": [
"I have been asked to give a translator's viewpoint on translating and the computer, and I would like to emphasize straightaway that what I am going to say is exactly that simply a personal impression of the present situation and future developments. While I am fortunate in being able to follow what is going on as a representative of the Council Secretariat on the Commission's \"CETIL\" Committee (Comite d'experts pour le transfer d'information entre langues europeennes) I am not speaking on behalf of the Council Secretariat to-day. Although I have only a short time available, I want to look at translating and the computer from two points of view. The first is that of a fairly large translating organization which is beginning to use a computerized terminology data base - Eurodicautom - and may become a user of machine translation. The second point of view is that of a translator and being a staff translator myself I have had to try to put myself into a freelance translator's shoes as well, in order to get a complete picture."
],
"authors": [
{
"name": [
"P. J. Arthern"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null
],
"s2_corpus_id": [
"61821757"
],
"intents": [
[]
],
"isInfluential": [
false
]
} | Problem: The lack of a centralized terminological data bank in the United Kingdom hinders efficient communication, especially across language boundaries, and poses challenges in standardizing and disseminating specialized terminology.
Solution: Establishing a multifunctional, multilingual, and multidisciplinary British Linguistic Data Bank (LDB) that is widely accessible will address the issues of terminology management, facilitate efficient communication, and cater to the diverse needs of various user groups, ultimately enhancing translation services and promoting effective communication with foreign countries. | 531 | 0.00565 | null | null | null | null | null | null | null | null |
7b6cad199dd8dd4f147cd8c3157e255ecd23889e | 58367559 | null | Introduction: help from the computer | Translator and consultant, London 'THERE YOU HAVE it: the difference between the human translator and the machine', said Barbara Snell, chairman of 'Machine Aids for Translators', when she saw the cover for these proceedings. Fully automatic translation was no longer 'pie in the sky', she had said when introducing the conference: 'machine translation may not be pie on the table, but it is perhaps pie in the oven.' If intelligent youngsters were not to be put off translating as a career, therefore, translators must equip themselves with machine aids in order to fulfil their potential and make the most of one attribute which the machine would never acquire: the ability to think. The translations of the conference title, so non-literal and thoughtful, typify the human translator's approach. We render not words, but ideas. 'Machine Aids for Translators' took place in the Kensington Close Hotel, London, in November 1980, almost exactly two years after the first major meeting on 'Translating and the Computer'. 1 Whereas the 1978 event had introduced both of the twin subjects of machine translation and computer aids, the 1980 one concentrated on the latter. 'The added attraction of an exhibition,' suggested our chairman, 'is significant. It shows that we are keeping our feet firmly on the ground, concentrating on what translators can do to increase their scope with the aid of modern technology.' Tony Stiegler (Application Programming Techniques) looked at office costs and the likely effects of present and future machine aids. Robert Clark (freelance translator) reported his experience of a word processor ('terrific, but .. .'), and called for a users' group to press for standard media formats and commands. Pauline Duckitt (Pharmaceutical Society) gave a deft and thought-provoking explanation of how translators could use online information retrieval to tap distant sources of information-not only Eurodicautom, the EEC term bank, but all over the world. John McNaught (Centre for Computational Linguistics, UMIST) talked of the proposed British term bank, multilingual, multidisciplinary and sorely needed. Peter Arthern (Council of the European Communities) discussed machine aids for the large European institutions, particularly his own, in which 45 per cent of the translation output is existing text which has been amended. Finally, Professor Juan Sager (UMIST) gave an illuminating summary of the changes in the translation market. The exhibition attracted much attention, remaining open for some hours after the conference. The European Community institutions demonstrated both their much valued glossary and the Echo service, which provides online access to Eurodicautom. Weidner Communications (Utah) and Hamilton Rentals showed the Weidner machineaided translation system. Data Recall and Rank Xerox demonstrated their word processors, and Technical Translation International combined with CPT (UK) to present a 10-language integrated word-processing system, which includes input by optical scanning and output on telex or phototypesetting tape. | {
"name": [
"Lawson, Veronica"
],
"affiliation": [
null
]
} | null | null | Translating and the Computer: Machine aids for translators | 1980-11-01 | 2 | 1 | null | The great interest aroused by machines was indicated by the fact that 200 people attended the conference, 12 per cent more than in 1978, despite an unfavourable economic climate. Because of the emphasis on human needs, the group with which Aslib Technical Translation Group cooperated on this occasion was the Translators' Guild of the Institute of Linguists (the other British body for technical translators), and not the Aslib Informatics Group as in 1978. The proportion of the participants who were translators remained at 60 per cent, but the ostensible absence of machine translation from the programme seems to have caused a shift away from the universities and information science towards administrators, data processing people and, significantly, translation users. Although the strong pound had made London expensive, one third of the participants came from abroad, the same proportion as in 1978; the number of countries represented was still 12, and included the US and Africa.'Computers are putting translation on the map,' one large translation company said. 'We could never get in to see senior management before, but when we began using computer aids, they were suddenly happy to talk to us.' Nor is it only large translation services which can benefit from the increased efficiency and prestige. A word processor, for example, may still seem too dear to many translators, at £7,000 or £9,000. However, there is a cheaper alternative. Buy the parts separately-microcomputer, screen, wordprocessing program and printer-and you can have a useful wordprocessing system for as little as £1,400 or, with a better printer, £2,100. (Now, in fact, the cheapest home computers at about £100 may offer simple wordprocessing, but they do not have the versatility and robustness which translators need.) For a small additional outlay, moreover, a word processor can usually be used as a terminal to retrieve information from data banks elsewhere. The disadvantages of machine aids, such as the incompatibility of floppy discs from different word processors, should not be overlooked, but the general picture is very promising.As the machine streamlines and diversifies the translator's job, our view of that job will change considerably. 'It is only when you have worked day after day with [it] for some time that your conventional concepts break down and you begin-little by little-to glimpse the possibilities,' Robert Clark said of his word processor. This applies also to other machine aids and even, in my experience, to machine translation. Fortunately, unlike some groups, translators need not fear replacement by machines, for the spread of industrialization and the 'information explosion' have produced a huge and largely unfilled need for translations. The computer will, however, allow the translation user to be offered 'a wider range of products', as Professor Sager said, from the traditional full human translation down (far down!) to raw machine translation for information scanning. (This theme of choice was also explored the next day in a Guild seminar on 'Translation Specifications'.)Choice, of course, requires the exercise of judgment. The translator must become more versatile, the user probably more aware of the translator and his skills. Above all, choice will emphasize the difference between the human and the machine. 
'Tout ce qui est mécanique peut être fait d'une façon satisfaisante par un mécanisme, tout ce qui demande les connaissances, l'expertise, l'intelligence, de l'être humain-c'est-à-dire la partie du boulot [job] qui est vraiment intéressante-tout ça reste du domaine du traducteur.' 2 Machine translation, although excluded from the conference programme, kept creeping in. It was even represented in the exhibition, for the Weidner, though marketed firmly and sensibly as a machine aid for translators, is in fact a machine translation system. MT also made an important appearance in Mr Arthern's paper. Systran, bought by the European Commission in 1976, may not yet be good enough for their translators; but the Cambridge Language Research Unit has succeeded in 'machine-translating' the program into English, and once translators can understand it they will see ways to improve it. Machine translation seems indeed to demand the insight of the professional translator, as well as linguists and computer scientists. The next in this series of conferences is to be on machine translation: the practical experience of machine translation. The speakers, notably translator/posteditors, will be people who work with MT systems in regular practical use. Until we see whether the machine is a help or a hindrance, or both, or even neither, the debate will continue.
:
The great interest aroused by machines was indicated by the fact that 200 people attended the conference, 12 per cent more than in 1978, despite an unfavourable economic climate. Because of the emphasis on human needs, the group with which Aslib Technical Translation Group cooperated on this occasion was the Translators' Guild of the Institute of Linguists (the other British body for technical translators), and not the Aslib Informatics Group as in 1978. The proportion of the participants who were translators remained at 60 per cent, but the ostensible absence of machine translation from the programme seems to have caused a shift away from the universities and information science towards administrators, data processing people and, significantly, translation users. Although the strong pound had made London expensive, one third of the participants came from abroad, the same proportion as in 1978; the number of countries represented was still 12, and included the US and Africa.'Computers are putting translation on the map,' one large translation company said. 'We could never get in to see senior management before, but when we began using computer aids, they were suddenly happy to talk to us.' Nor is it only large translation services which can benefit from the increased efficiency and prestige. A word processor, for example, may still seem too dear to many translators, at £7,000 or £9,000. However, there is a cheaper alternative. Buy the parts separately-microcomputer, screen, wordprocessing program and printer-and you can have a useful wordprocessing system for as little as £1,400 or, with a better printer, £2,100. (Now, in fact, the cheapest home computers at about £100 may offer simple wordprocessing, but they do not have the versatility and robustness which translators need.) For a small additional outlay, moreover, a word processor can usually be used as a terminal to retrieve information from data banks elsewhere. The disadvantages of machine aids, such as the incompatibility of floppy discs from different word processors, should not be overlooked, but the general picture is very promising.As the machine streamlines and diversifies the translator's job, our view of that job will change considerably. 'It is only when you have worked day after day with [it] for some time that your conventional concepts break down and you begin-little by little-to glimpse the possibilities,' Robert Clark said of his word processor. This applies also to other machine aids and even, in my experience, to machine translation. Fortunately, unlike some groups, translators need not fear replacement by machines, for the spread of industrialization and the 'information explosion' have produced a huge and largely unfilled need for translations. The computer will, however, allow the translation user to be offered 'a wider range of products', as Professor Sager said, from the traditional full human translation down (far down!) to raw machine translation for information scanning. (This theme of choice was also explored the next day in a Guild seminar on 'Translation Specifications'.)Choice, of course, requires the exercise of judgment. The translator must become more versatile, the user probably more aware of the translator and his skills. Above all, choice will emphasize the difference between the human and the machine. 
'Tout ce qui est mécanique peut être fait d'une façon satisfaisante par un mécanisme, tout ce qui demande les connaissances, l'expertise, l'intelligence, de l'être humain-c'est-à-dire la partie du boulot [job] qui est vraiment intéressante-tout ça reste du domaine du traducteur.' 2 Machine translation, although excluded from the conference programme, kept creeping in. It was even represented in the exhibition, for the Weidner, though marketed firmly and sensibly as a machine aid for translators, is in fact a machine translation system. MT also made an important appearance in Mr Arthern's paper. Systran, bought by the European Commission in 1976, may not yet be good enough for their translators; but the Cambridge Language Research Unit has succeeded in 'machine-translating' the program into English, and once translators can understand it they will see ways to improve it. Machine translation seems indeed to demand the insight of the professional translator, as well as linguists and computer scientists. The next in this series of conferences is to be on machine translation: the practical experience of machine translation. The speakers, notably translator/posteditors, will be people who work with MT systems in regular practical use. Until we see whether the machine is a help or a hindrance, or both, or even neither, the debate will continue.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 531 | 0.001883 | null | null | null | null | null | null | null | null |
f71b85703678c4587d391f6b2d2a2179213f3681 | 62140917 | null | Machine aids for translators: a review | Since translating is an office activity which, like other office activities, consists primarily of processing text, it is instructive to examine the reasons for automating text production. Most of these reasons will be equally applicable to the production of translated texts. We shall then investigate the current developments in machines and micro-chip technology which are applicable to translation. These will include voice recognition and response and optical character recognition equipment amongst others. | {
"name": [
"Stiegler, A. D."
],
"affiliation": [
null
]
} | null | null | Translating and the Computer: Machine aids for translators | 1980-11-01 | 0 | 1 | null | YOU MIGHT ASK yourselves why machine aids at all? So I'd like to talk a little about the economic justification for automating office activity. Very briefly if we look at investment in capital equipment in the United States in 1975 we find that the investment in the office was less than a tenth of what it is in the other areas of agriculture and manufacture. If we then look at how that money is spent in the office we find that 86 per cent is labour costs. The office is a very labour-intensive area. (See Fig. l ). This is evident from the figure of $2,000 of capital expenditure per annum. (See Fig. 2 ). If we then look at the costs of one of the primary activities in the office, typing an A4 page, if we look at how that cost has changed over the past 25 or 30 years, we see that it's increasing and in fact today it is around $7.00 per page, simply because it is a labour intensive activity. So what about capital expenditure? What's happening to technology, what's happening to machines? Well, everybody knows about chips by now, I suppose, having seen the programmes on the BBC and on ITV and probably on all the other channels in the world and we know that technology costs are dropping and they are dropping at a very considerable rate. For example the costs of logic, the actual processing of the data that you've got, the cost of that logic is going to drop by 1985 by 95 per cent according to these figures. According to more recent figures that I have in fact it has dropped by 95 per cent today and it will probably drop by quite a bit more by 1985.So that is why it will probably be cheaper to do it with machinery than it will be to do it with labour. Having said that, what are we doing with machinery? What we're doing with it is typing and you might well ask yourself what's wrong with a typewriter? The primary thing that's wrong with a typewriter is that it has no memory. It cannot remember what you keyed into it yesterday and the entire dialogue that you have with a typewriter is conducted on a piece of paper. It's hard copy. And as a consequence it's very hard to manipulate it. So if you've made a mistake or if you've decided to put a paragraph in a different place or if you've decided to change something in some way, you can't do it very easily with a typewriter. So what do you want to do? You want to use a word processor. Well, what's a word processor? A word processor is a device which assists in text preparation and production and it has reproduction, storage and retrieval as well as transmission potential, transmission potential being the ability to send, like a Telex, a document from one place to another along a telephone line. So that's what a modern word processor is and what makes it so much better than a typewriter? Typically it has memory to record the typed information and here's an example of difficulty of hard copy dialogue and one of them is: I'll never be able to change that word ''type' to 'typed' because that's what it should be, until I make up this slide again because I haven't preserved that information electronically. 
So it has memory to record typed information and typically it has a visual display unit with the operator able to see the corrections being made as they are being made.The whole activity of word processing can be summed up in five headings: it's the sum of all the activities involved in the origination, in the preparation in the production and reproduction, in the storage and retrieval and in the transmission of texts. That's what word processing is all about.So now we know what word processing is, we know a little bit about why we want to buy a word processor or might think about buying a word processor. And so what should we do when we buy one? The first thing that you should do is examine why you want to buy it and you have to define the contribution that you require of this device. You have to ask yourself: what is it that this is going to give me that I haven't already got? And if what it's going to give you is not sufficient to justify the cost-and the cost of these things is fairly high, make no mistake about it. It's not like buying an electric typewriter. An electric typewriter costs you about £2,000 and one of these things will cost you the minimum of about £8,000. So you have to define what the contribution is so that you can tell which piece of equipment to buy. The next thing you have to do is to determine how to achieve that contribution, how to obtain the benefit that you have told yourself you are going to get by introducing word processing.The first thing you have to do is analyse existing work loads. For example, in the translation field if your existing work load entails a large amount of report writing and therefore editing, that will make up the bulk of your work, you may find that in analysing that work there are places where you can use standard text or quasi standard text, so you have to analyse that to find out exactly what it is that you are actually doing. You have to draw a plan for revised work loads because what's going to happen is that now, in a typical office-and this is incidentally a general discussion that is not necessarily specific to translation. This is true of any office activity. In any office, what's going to happen with the introduction of word processing equipment is that the work flows will change. The places that work goes to and from will change. And the final thing that you have to do is to get a commitment of management if you don't happen to be your own boss. You also have to get the commitment of the people who are going to use it.Some of the ways in which you are going to be able to avoid mistakes is to look at the word processor as a system and not as a collection of individual features. The reason I say this is because when you go to a word processing exhibition, and there is one at Wembley, for example, every year and there's the Hanover fair in Germany and SICOB in Paris which all exhibit word processing equipment, you will be blinded by science. You'll walk in and you'll see these fantastic machines and people will say 'My machine has a feature to ring a bell every time you want to do something or whatever and its got bells and whistles on it which no other machine has'. You can't look at a word processor that way. You have to look at it as a system within itself and also as part of the larger system which is your office. 
So it has to be viewed in its context and if you feel there is a feature you need, and for translators for example, one such feature might be the ability to display diacritical marks on the screen, have it demonstrated. Don't believe anything that a manufacturer tells you! The next thing you should do, is to test the equipment with samples of your own work so that you can see it actually performing the kinds of jobs which you wish to perform. A typical mistake which people make in selecting equipment is that equipment is rejected because it doesn't have a particular feature. This could be a mistake because there are certain bits and bobs of equipment around which you think may not have a particular feature which you need but if you find one that does have that feature, it may not have a lot of other features that you really need and therefore you might select it along with this particular feature and make a mistake.Now one of the problems with word processing and the area of office automation lies in the fact that in most instances, any word processor is better than no word processor because it is going to give you the things that you are looking for. It's going to give you easier editing, it's going to give you, if you're not your own typist, faster turn around from the typist, it's going to give you higher quality output from the point of view of the number of mistakes in it and so on. It's going to give you less requirement for proof reading and all the great benefits one achieves from word processing are going to be achieved whatever word processor you use. But for your application, some word processors are better than others and your application over here may not be the same as that person's application over there. He may have a totally different problem from yours. The other problem is that when you do find a feature that the manufacturer has demonstrated, the manufacturer's demonstration may not be understood by you. You may not understand exactly what that feature he's talking about does, and you won't understand it until such time as you start using the equipment yourself.I've mentioned what word processing offers. It offers improved productivity, that is to say you'll be able to get more work done with the same number of people or the same amount of work done with fewer people typing it or even editing it. The next thing it does is to reduce the amount of time that it takes for the typist to do it. Typically when typing on a word processor when you put the original in, the amount saved is not more than 10 per cent to 15 per cent. Some people I know have claimed higher figures than that for the original typing work. It's when you come to the corrections that the turn around time becomes vastly reduced. You may have forgotten a paragraph or you may have put a paragraph in where you didn't really want it, you have to put it in somewhere else, and it requires a typist retyping two, three or maybe four pages before she gets to a break where everything fits, and then everything must be proof-read. On a word processor, this doesn't happen. The turn around times both for the typing and for your proof reading are very greatly reduced, and you get better quality. You have fewer mistakes, typically, in a document produced on a word processor than in one produced on a typewriter. 
If you have to send out two or three copies of something, they can all be originals and it doesn't cost you anything more than just the paper and a little bit of time on the machine to produce three, four or five originals. The thing you mustn''t forget is the reduction in author time.Finally, it turns out that in most cases, typists like it. In most installations where word processing equipment has been introduced, typists actually prefer it to their typewriters, because it's a lot easier from their point of view. One of the things for example that a typist does is that when she's typing a line towards the end of the line her typing speed actually slows down because she's anticipating the end of the line and a similar thing happens when she gets to the end of a page. On a word processor this doesn't happen because the equipment automatically allows for the margin and in most cases pages don't really mean anything to a word processor because they regard text as documents. Indeed in these terms a page is difficult to define.What sorts of word processors are there available today? The first sort that I'd like to talk about is what are called stand-alone systems. A stand-alone system is typified by a configuration such as this. As with any other word processor it has some sort of input and display device for keyboard, typically a television-like screen and a typewriter keyboard underneath it. Unfortunately most of the typewriter keyboards tend to be in the normal typewriter layout which is called 'Qwerty' by the first six characters of the top line of alphabetics, which of course is different in France and different in Germany but we're stuck with that, unfortunately, because of the people who invented the typewriter. It has logic to assist the typist with the typing process. It also has some type of storage and clearly it has a hard copy output device-a printer. That is a typical stand-alone word processor. You might find optional extra features on a stand-alone word processor-you might find an extra disc drive, for example. A floppy disc today can hold something in the neighbourhood of a half million characters and you might find you have two such units so that you could hold a million characters for immediate access. The floppy disc is a magnetic disc about the size of an old 45 rpm record. Now you will find with stand-alone equipment that you can only access 1, 2 or maybe 3 or 4 such discs and therefore immediate access at any one given time is to something in the neighbourhood of 2 million characters, so an extra disc drive can be a very advantageous thing to have. Another optional extra is the communications interface. Another might be an output to a photo typesetting device and finally you might have some kind of calculation mode to assist you in typing columns of figures.Stand-alone systems tend to meet the needs of a small office. You may find that freelance translators, for example, would be more inclined towards a stand-alone system than they would to a large shared facility system, which I'll talk about next. And they are also good for people who just want to try word processing to see if and how it can help them. Now I'll talk about shared facility systems. The facilities of the word processor are those things we've just talked about-the storage, that is the discs, the logic-both the hardware and software, that is the technology and the programto assist the typist, the printers and the communications interfaces. All of these things can be shared by a given machine. 
One typical example, you might have a set of work stations out here, each of which is being used by an individual typist, a central processor which is driving the work stations, ie it has the logic in it to assist the typist, and it uses a file management sub-system and a printing sub-system and this is a sort of typing pool environment of word processor, although having said that, one of these work stations could possibly be on someone's desk somewhere else, outside the typing pool area, since there is no necessity for them all to be in one place. So fundamentally that is the kind of system that is referred to as shared facility. There are other examples-see Figs. 3-7. We've talked about productivity gains and gains in the amount of time and the amount of work that can be performed. What sort of gains would you expect from word processing equipment? A shared facility system-a shared logic system as it's sometimes called-is better than a stand-alone system because it alleviates the problem of housekeeping. As I said before the discs that you have contain a fixed amount of information, say, half a million characters. If you've got more than that number of characters, and most of you probably have, most people would say that's about 100-200 pages-a page is difficult to define. However they have a limited amount of information, whereas a shared facility system, sharing that disc sub-system, might have hundreds of millions of characters of information accessible right now from any one of the terminals and therefore it alleviates the housekeeping problem of taking care of those discs and therefore you would expect the productivity to be greater than you could expect in a stand-alone system. The reason that the number in Fig. 8 (80 per cent) is lower-that is to say with the same number of people and word processors, I got 80 per cent of the amount of work done than I got done before-is because of the housekeeping problem, the problem of floppy discs and the filing of them. Now we've talked about word processors. What else is there around that might help translators? (See Fig. 9 ). The first thing that I have listed in Fig. 9 is optical character readers. There are machines around today that can read typewritten text and store it electronically somewhere, normally magnetically. The use of these things is fairly obvious in as much as if you have a typewritten script or a typewritten document and you wish to put it on to your word processor, it's much easier to get it on by an optical character reading device than it would be to re-key it.Another item is a video disc which is a disc which essentially stores images and it stores images in much the same form as a television picture and these images can be re-displayed on a television-type screen. The recording mechanism of the video disc is actually a laser and it can store millions and millions of pages on one very small gramophone record size disc; it could store complete dictionaries that you might need for translating more technical items for example.Another feature again along the same lines of storing information, storing and retrieving information like dictionaries, is the advent of what is called bubble memory which is a bubble of magnetism floating around in a silicon chip and it can be used, generated, retrieved and saved in this little tiny chip. They come about 7½ cm X 7½ cm and there may be 10,000 characters of information on a chip about that size. 
Voice response units are devices which respond with a human voice to an enquiry on a machine, these could be more useful to interpreters, perhaps than to translators because if you get into some technical terms which again, you may not necessarily be familiar with, the voice response unit could give you the appropriate pronunciation of that term.Then there's speech recognition. Speech recognition is an interesting affair because today what we recognize in machines is what's called discrete speech, which is to say you have approximately 20 or 30 or maybe 50 or 100 words, that is words or phrases, that the machine recognizes and can translate into an electronic form. This could be very interesting if we ever get to recognizing what is called continuous speech, translated into some kind of electronic form. This actually has been done in the United States, I understand, but one of the problems is the amount of time and the amount of processor capability that it takes up. One can read a sentence into the machine at normal reading pace and the machine will actually interpret that sentence and put it into magnetic storage in a machine readable format. It takes 75 hours to do it at present, but constant progress is being made.Finally, what about electronic phrase books? How many of you have seen the adverts recently in the Sunday Times magazine for Sharp electronic phrase books? Essentially you key in a word in a foreign language or in English and it displays on a screen the same word in English or the foreign language, according to which way you are doing it. They're really for tourists fundamentally and I mention them here because the idea is that you can take a calculator-size device and simply by removing one element of that device and putting in another element, you can translate from say English to French in one instance and English to German in another instance. Well, that's interesting but what good is that to a translator? Conceivably you can put in a technical dictionary for civil engineering because you have been translating civil engineering documents and a couple of minutes later you can pull that module out and insert the module for computers because it involves quite a different technical vocabulary to that of civil engineering. Therefore I think that these things can be of use even to the professional translator providing the manufacturers of them-people like Sharp and Nixdorf and Texas Instruments-feel that there is enough requirement for them and again it goes back to the question of the storing and retrieving of information and dictionary look-ups. | null | null | null | null | Main paper:
:
YOU MIGHT ASK yourselves why machine aids at all? So I'd like to talk a little about the economic justification for automating office activity. Very briefly if we look at investment in capital equipment in the United States in 1975 we find that the investment in the office was less than a tenth of what it is in the other areas of agriculture and manufacture. If we then look at how that money is spent in the office we find that 86 per cent is labour costs. The office is a very labour-intensive area. (See Fig. l ). This is evident from the figure of $2,000 of capital expenditure per annum. (See Fig. 2 ). If we then look at the costs of one of the primary activities in the office, typing an A4 page, if we look at how that cost has changed over the past 25 or 30 years, we see that it's increasing and in fact today it is around $7.00 per page, simply because it is a labour intensive activity. So what about capital expenditure? What's happening to technology, what's happening to machines? Well, everybody knows about chips by now, I suppose, having seen the programmes on the BBC and on ITV and probably on all the other channels in the world and we know that technology costs are dropping and they are dropping at a very considerable rate. For example the costs of logic, the actual processing of the data that you've got, the cost of that logic is going to drop by 1985 by 95 per cent according to these figures. According to more recent figures that I have in fact it has dropped by 95 per cent today and it will probably drop by quite a bit more by 1985.So that is why it will probably be cheaper to do it with machinery than it will be to do it with labour. Having said that, what are we doing with machinery? What we're doing with it is typing and you might well ask yourself what's wrong with a typewriter? The primary thing that's wrong with a typewriter is that it has no memory. It cannot remember what you keyed into it yesterday and the entire dialogue that you have with a typewriter is conducted on a piece of paper. It's hard copy. And as a consequence it's very hard to manipulate it. So if you've made a mistake or if you've decided to put a paragraph in a different place or if you've decided to change something in some way, you can't do it very easily with a typewriter. So what do you want to do? You want to use a word processor. Well, what's a word processor? A word processor is a device which assists in text preparation and production and it has reproduction, storage and retrieval as well as transmission potential, transmission potential being the ability to send, like a Telex, a document from one place to another along a telephone line. So that's what a modern word processor is and what makes it so much better than a typewriter? Typically it has memory to record the typed information and here's an example of difficulty of hard copy dialogue and one of them is: I'll never be able to change that word ''type' to 'typed' because that's what it should be, until I make up this slide again because I haven't preserved that information electronically. So it has memory to record typed information and typically it has a visual display unit with the operator able to see the corrections being made as they are being made.The whole activity of word processing can be summed up in five headings: it's the sum of all the activities involved in the origination, in the preparation in the production and reproduction, in the storage and retrieval and in the transmission of texts. 
That's what word processing is all about.So now we know what word processing is, we know a little bit about why we want to buy a word processor or might think about buying a word processor. And so what should we do when we buy one? The first thing that you should do is examine why you want to buy it and you have to define the contribution that you require of this device. You have to ask yourself: what is it that this is going to give me that I haven't already got? And if what it's going to give you is not sufficient to justify the cost-and the cost of these things is fairly high, make no mistake about it. It's not like buying an electric typewriter. An electric typewriter costs you about £2,000 and one of these things will cost you the minimum of about £8,000. So you have to define what the contribution is so that you can tell which piece of equipment to buy. The next thing you have to do is to determine how to achieve that contribution, how to obtain the benefit that you have told yourself you are going to get by introducing word processing.The first thing you have to do is analyse existing work loads. For example, in the translation field if your existing work load entails a large amount of report writing and therefore editing, that will make up the bulk of your work, you may find that in analysing that work there are places where you can use standard text or quasi standard text, so you have to analyse that to find out exactly what it is that you are actually doing. You have to draw a plan for revised work loads because what's going to happen is that now, in a typical office-and this is incidentally a general discussion that is not necessarily specific to translation. This is true of any office activity. In any office, what's going to happen with the introduction of word processing equipment is that the work flows will change. The places that work goes to and from will change. And the final thing that you have to do is to get a commitment of management if you don't happen to be your own boss. You also have to get the commitment of the people who are going to use it.Some of the ways in which you are going to be able to avoid mistakes is to look at the word processor as a system and not as a collection of individual features. The reason I say this is because when you go to a word processing exhibition, and there is one at Wembley, for example, every year and there's the Hanover fair in Germany and SICOB in Paris which all exhibit word processing equipment, you will be blinded by science. You'll walk in and you'll see these fantastic machines and people will say 'My machine has a feature to ring a bell every time you want to do something or whatever and its got bells and whistles on it which no other machine has'. You can't look at a word processor that way. You have to look at it as a system within itself and also as part of the larger system which is your office. So it has to be viewed in its context and if you feel there is a feature you need, and for translators for example, one such feature might be the ability to display diacritical marks on the screen, have it demonstrated. Don't believe anything that a manufacturer tells you! The next thing you should do, is to test the equipment with samples of your own work so that you can see it actually performing the kinds of jobs which you wish to perform. A typical mistake which people make in selecting equipment is that equipment is rejected because it doesn't have a particular feature. 
This could be a mistake because there are certain bits and bobs of equipment around which you think may not have a particular feature which you need but if you find one that does have that feature, it may not have a lot of other features that you really need and therefore you might select it along with this particular feature and make a mistake.Now one of the problems with word processing and the area of office automation lies in the fact that in most instances, any word processor is better than no word processor because it is going to give you the things that you are looking for. It's going to give you easier editing, it's going to give you, if you're not your own typist, faster turn around from the typist, it's going to give you higher quality output from the point of view of the number of mistakes in it and so on. It's going to give you less requirement for proof reading and all the great benefits one achieves from word processing are going to be achieved whatever word processor you use. But for your application, some word processors are better than others and your application over here may not be the same as that person's application over there. He may have a totally different problem from yours. The other problem is that when you do find a feature that the manufacturer has demonstrated, the manufacturer's demonstration may not be understood by you. You may not understand exactly what that feature he's talking about does, and you won't understand it until such time as you start using the equipment yourself.I've mentioned what word processing offers. It offers improved productivity, that is to say you'll be able to get more work done with the same number of people or the same amount of work done with fewer people typing it or even editing it. The next thing it does is to reduce the amount of time that it takes for the typist to do it. Typically when typing on a word processor when you put the original in, the amount saved is not more than 10 per cent to 15 per cent. Some people I know have claimed higher figures than that for the original typing work. It's when you come to the corrections that the turn around time becomes vastly reduced. You may have forgotten a paragraph or you may have put a paragraph in where you didn't really want it, you have to put it in somewhere else, and it requires a typist retyping two, three or maybe four pages before she gets to a break where everything fits, and then everything must be proof-read. On a word processor, this doesn't happen. The turn around times both for the typing and for your proof reading are very greatly reduced, and you get better quality. You have fewer mistakes, typically, in a document produced on a word processor than in one produced on a typewriter. If you have to send out two or three copies of something, they can all be originals and it doesn't cost you anything more than just the paper and a little bit of time on the machine to produce three, four or five originals. The thing you mustn''t forget is the reduction in author time.Finally, it turns out that in most cases, typists like it. In most installations where word processing equipment has been introduced, typists actually prefer it to their typewriters, because it's a lot easier from their point of view. One of the things for example that a typist does is that when she's typing a line towards the end of the line her typing speed actually slows down because she's anticipating the end of the line and a similar thing happens when she gets to the end of a page. 
On a word processor this doesn't happen because the equipment automatically allows for the margin and in most cases pages don't really mean anything to a word processor because they regard text as documents. Indeed in these terms a page is difficult to define.What sorts of word processors are there available today? The first sort that I'd like to talk about is what are called stand-alone systems. A stand-alone system is typified by a configuration such as this. As with any other word processor it has some sort of input and display device for keyboard, typically a television-like screen and a typewriter keyboard underneath it. Unfortunately most of the typewriter keyboards tend to be in the normal typewriter layout which is called 'Qwerty' by the first six characters of the top line of alphabetics, which of course is different in France and different in Germany but we're stuck with that, unfortunately, because of the people who invented the typewriter. It has logic to assist the typist with the typing process. It also has some type of storage and clearly it has a hard copy output device-a printer. That is a typical stand-alone word processor. You might find optional extra features on a stand-alone word processor-you might find an extra disc drive, for example. A floppy disc today can hold something in the neighbourhood of a half million characters and you might find you have two such units so that you could hold a million characters for immediate access. The floppy disc is a magnetic disc about the size of an old 45 rpm record. Now you will find with stand-alone equipment that you can only access 1, 2 or maybe 3 or 4 such discs and therefore immediate access at any one given time is to something in the neighbourhood of 2 million characters, so an extra disc drive can be a very advantageous thing to have. Another optional extra is the communications interface. Another might be an output to a photo typesetting device and finally you might have some kind of calculation mode to assist you in typing columns of figures.Stand-alone systems tend to meet the needs of a small office. You may find that freelance translators, for example, would be more inclined towards a stand-alone system than they would to a large shared facility system, which I'll talk about next. And they are also good for people who just want to try word processing to see if and how it can help them. Now I'll talk about shared facility systems. The facilities of the word processor are those things we've just talked about-the storage, that is the discs, the logic-both the hardware and software, that is the technology and the programto assist the typist, the printers and the communications interfaces. All of these things can be shared by a given machine. One typical example, you might have a set of work stations out here, each of which is being used by an individual typist, a central processor which is driving the work stations, ie it has the logic in it to assist the typist, and it uses a file management sub-system and a printing sub-system and this is a sort of typing pool environment of word processor, although having said that, one of these work stations could possibly be on someone's desk somewhere else, outside the typing pool area, since there is no necessity for them all to be in one place. So fundamentally that is the kind of system that is referred to as shared facility. There are other examples-see Figs. 3-7. 
We've talked about productivity gains and gains in the amount of time and the amount of work that can be performed. What sort of gains would you expect from word processing equipment? A shared facility system-a shared logic system as it's sometimes called-is better than a stand-alone system because it alleviates the problem of housekeeping. As I said before the discs that you have contain a fixed amount of information, say, half a million characters. If you've got more than that number of characters, and most of you probably have, most people would say that's about 100-200 pages-a page is difficult to define. However they have a limited amount of information, whereas a shared facility system, sharing that disc sub-system, might have hundreds of millions of characters of information accessible right now from any one of the terminals and therefore it alleviates the housekeeping problem of taking care of those discs and therefore you would expect the productivity to be greater than you could expect in a stand-alone system. The reason that the number in Fig. 8 (80 per cent) is lower-that is to say with the same number of people and word processors, I got 80 per cent of the amount of work done than I got done before-is because of the housekeeping problem, the problem of floppy discs and the filing of them. Now we've talked about word processors. What else is there around that might help translators? (See Fig. 9 ). The first thing that I have listed in Fig. 9 is optical character readers. There are machines around today that can read typewritten text and store it electronically somewhere, normally magnetically. The use of these things is fairly obvious in as much as if you have a typewritten script or a typewritten document and you wish to put it on to your word processor, it's much easier to get it on by an optical character reading device than it would be to re-key it.Another item is a video disc which is a disc which essentially stores images and it stores images in much the same form as a television picture and these images can be re-displayed on a television-type screen. The recording mechanism of the video disc is actually a laser and it can store millions and millions of pages on one very small gramophone record size disc; it could store complete dictionaries that you might need for translating more technical items for example.Another feature again along the same lines of storing information, storing and retrieving information like dictionaries, is the advent of what is called bubble memory which is a bubble of magnetism floating around in a silicon chip and it can be used, generated, retrieved and saved in this little tiny chip. They come about 7½ cm X 7½ cm and there may be 10,000 characters of information on a chip about that size. Voice response units are devices which respond with a human voice to an enquiry on a machine, these could be more useful to interpreters, perhaps than to translators because if you get into some technical terms which again, you may not necessarily be familiar with, the voice response unit could give you the appropriate pronunciation of that term.Then there's speech recognition. Speech recognition is an interesting affair because today what we recognize in machines is what's called discrete speech, which is to say you have approximately 20 or 30 or maybe 50 or 100 words, that is words or phrases, that the machine recognizes and can translate into an electronic form. 
This could be very interesting if we ever get to recognizing what is called continuous speech, translated into some kind of electronic form. This actually has been done in the United States, I understand, but one of the problems is the amount of time and the amount of processor capability that it takes up. One can read a sentence into the machine at normal reading pace and the machine will actually interpret that sentence and put it into magnetic storage in a machine readable format. It takes 75 hours to do it at present, but constant progress is being made.Finally, what about electronic phrase books? How many of you have seen the adverts recently in the Sunday Times magazine for Sharp electronic phrase books? Essentially you key in a word in a foreign language or in English and it displays on a screen the same word in English or the foreign language, according to which way you are doing it. They're really for tourists fundamentally and I mention them here because the idea is that you can take a calculator-size device and simply by removing one element of that device and putting in another element, you can translate from say English to French in one instance and English to German in another instance. Well, that's interesting but what good is that to a translator? Conceivably you can put in a technical dictionary for civil engineering because you have been translating civil engineering documents and a couple of minutes later you can pull that module out and insert the module for computers because it involves quite a different technical vocabulary to that of civil engineering. Therefore I think that these things can be of use even to the professional translator providing the manufacturers of them-people like Sharp and Nixdorf and Texas Instruments-feel that there is enough requirement for them and again it goes back to the question of the storing and retrieving of information and dictionary look-ups.
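The 'module' idea in the preceding paragraph, keying a term into a hand-held device and swapping subject-field vocabularies in and out, can be pictured with a few lines of code. This is only an illustrative sketch: the tiny English-German entries and the class are invented for the example and do not describe any actual Sharp, Nixdorf or Texas Instruments product.

```python
# Hypothetical sketch of a phrase book with swappable subject-field modules.
CIVIL_ENGINEERING_EN_DE = {"formwork": "Schalung", "reinforcement": "Bewehrung"}
COMPUTING_EN_DE = {"floppy disc": "Diskette", "word processor": "Textverarbeitungssystem"}

class PhraseBook:
    def __init__(self):
        self.module = None           # no vocabulary module inserted yet

    def insert_module(self, module):
        self.module = module         # swap one subject field for another

    def look_up(self, word):
        if self.module is None:
            return "no module inserted"
        return self.module.get(word.lower(), "not in this module")

device = PhraseBook()
device.insert_module(CIVIL_ENGINEERING_EN_DE)
print(device.look_up("formwork"))         # Schalung
device.insert_module(COMPUTING_EN_DE)     # a couple of minutes later...
print(device.look_up("word processor"))   # Textverarbeitungssystem
```

The interest for the professional translator lies less in the trivial look-up than in the modularity: the same shell can serve quite different technical vocabularies, provided someone finds it worthwhile to compile them.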
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 531 | 0.001883 | null | null | null | null | null | null | null | null |
3b42897af87708b74bd88f744fe313f70fc00b14 | 62688056 | null | Aids unlimited: the scope for machine aids in a large organization | This paper examines the types of machine aid which are suitable for use in a large translating operation such as those met in the European Community institutions. After reviewing the way in which these machine aids are already being used in large organizations, and examining the areas in which they can be of benefit to the running of the whole organization, the speaker warns of possible difficulties in introducing them. If these difficulties can be overcome, many advantages can be gained in a large organization by introducing a fully-integrated word-processing system in which all texts are stored in electronic archives and can be transmitted electronically from one work station to another, and from one country to another. The principles on which such a system could be developed can also be of immediate practical interest to the small user. | {
"name": [
"Arthern, P. J."
],
"affiliation": [
null
]
} | null | null | Translating and the Computer: Machine aids for translators | 1980-11-01 | 1 | 11 | null | THE INFERENCE OF the title of my paper seems to be that large organizations have unlimited money to throw around, and can therefore afford to install unlimited machine aids for their translators, perhaps even going as far as replacing them by a high quality fully automatic machine translation system; and certainly going beyond what a 'small user' can permit himself.The reality is quite different, at least as far as the European Community institutions are concerned, since the harsh winds of the economic recession are now blowing across Europe, and the national Treasuries are sending their axe-men to Brussels to cut the Communities' budget to the bone. For example, at the Council Secretariat we discovered with some consternation recently that our 1981 draft budget for wordprocessing equipment will not allow us to continue renting the limited amount of it which we already have, just at the moment when there are signs of a dawning acceptance of what word processing can do for us.Large organizations in which the battle-cry is going to be that of saving money are, in the nature of things, going to have very little scope for introducing machine aids for their translators unless someone or some group in the organization stands up and fights for them.The real progress now being made in introducing machine aids for translators is in areas where the object is to make money, not to save it, i.e. in large go-ahead commercial translation agencies, in two or three of the big computer companies, and in operations such as the Systran and Weidner machine (-assisted) translation systems which are being aggressively marketed.However that may be, I propose to follow the precedent which earlier speakers have set, of talking about a particular operation of which they have practical experience, rather than addressing themselves to a wider, more theoretical, attack on their subject.Accordingly, I want to describe the scope for the use of machine aids in the Secretariat of the Council of the European Communities, as I see it. As I go along, I shall be mentioning some aspects of the various types of machine aids which are available or under development. I shall also refer to the use already being made of machine aids in other organizations, and to the difficulties people have encountered in introducing them and using them.The first step in deciding what can be done about a given situation is to discover what the situation is, so I will start by describing the way in which the Council Secretariat operates, and where the Translation Department fits in.The Secretariat exists principally to service meetings of the Council of the European Communities, the Permanent Representatives Committee, and all the many working parties involved in preparing the proposals for Community legislation which are put to the Council in the form of Regulations, Decisions and Directives. These proposals all originate in the Commission, which sends them to the Council in all six Community languages-seven after 1 January 1981, when Greece accedes to the Communities. 
Very urgent proposals may go straight to a meeting of the Council, and may even be translated in the Council Secretariat, but the general principle is that a proposal does not even start its journey through the Council's working parties until it has been received from the Commission in all the official languages.Such non-urgent proposals start their progress through the Council Secretariat by going to a working party of national experts who subject them to minute scrutiny, not simply to protect national interests, but in a genuine effort to discover any difficulties there could be in applying the legislation, and to produce legal texts which will hold water, and can be effectively applied in all the Member States, with their widely differing legal systems.When most of the problems have been resolved, a proposal goes to the Permanent Representatives Committee, consisting of the Member States' Permanent Representatives in Brussels (they rank as Ambassadors), who meet each week and who iron out as many of the remaining difficulties as they can before sending the proposals to the Council, either for approval on the nod, or for political discussion.Once agreed by the Council, in principle, the texts in the various official languages are vetted by a 'Jurist/Linguist's Working Party' whose job it is to ensure complete concordance between the texts in the various languages before they are published in the separate language editions of the Official Journal. It is worth noting here that there are not separate national editions: the French edition, for example, is valid in France, Belgium and Luxembourg, the Dutch edition in Belgium and the Netherlands, and the English edition in Ireland as well as in the United Kingdom.It has always been the practice in the Council Secretariat for the most important JULY/AUGUST 1981working language to be French. Consequently, as a proposal moves through the working parties and then the Permanent Representatives Committee it is repeatedly amended, and the administrators who act as secretaries for all the meetings produce an amended text of the proposal after each meeting, together with the minutes of the meeting, both documents normally being drafted in French. These French texts are then translated in the Translation Department into all the other official languages and distributed to the national officials for their use at the next meeting or, at the last stage, are submitted to the Council for formal adoption and publication in the Official Journal.It will be obvious from this brief outline of how the Secretariat operates that there would be tremendous advantages in using word-processors for typing the repeatedlyamended French texts of proposals for legislation, quite apart from their possible use in the Translation Department. As a matter of fact, one Directorate in the Secretariat is now using a word processor for this purpose, with encouraging results so far.Having sketched the background to our work, we can now look more closely at how the Translation Department operates. This Department now consists of seven Divisions, since we have recently welcomed our first Greek colleagues who form the nucleus of the Greek Division which will be required to translate Council texts into Greek as from 1 January next year. The other language Divisions are French, German, Dutch, Italian, Danish and English, the latter being my own Division. For the record, we also have a capacity to translate documents into Irish, but this is a limited operation. 
French being the language in which most Secretariat documents originate, the French Division's work is quite different from that of the other Divisions, consisting largely of translating documents received in their own language from the various national Permanent Representations.What I am going to say now, therefore, applies to the German, Dutch, Danish, Italian and Greek Divisions, in the same way as to the English Division, since we are all basically translating in parallel from French originals. We do translate texts from other languages now and again, but the proportion is so small that it hardly affects the argument which I am going to develop. For example, in the first three months of this year texts translated from German into English amounted to 1.6 per cent of the English Division's output, from Italian 1.4 per cent, from Dutch 0.4 per cent and from Danish 0.2 per cent.The typical Division, then, consists of some 45 to 50 linguists of whom about onethird are revisers and two-thirds are translators, with two or three archivists and two or three secretaries who book work into the Division, distribute it to the translators and revisers, and see that the finished translations are sent on to the Typing Pool by the stated deadlines.Present arrangements are that translators can type their own work, can use dictating machines, or can call on typists to whom they dictate their translations on the typewriter in their office. In principle, all translations are revised by a reviser before being sent to the Typing Pool, who are entirely responsible for the accuracy and presentation of their typing. Some documents are typed on stencils, for reproduction on duplicating machines and some on plain white paper to produce originals for offset printing. Now, where do machine aids fit into this picture?Of course, when we talk about machine aids the implicit assumption is that we mean computers or word processors (and the boundary between these is getting hazier every day), but the first machine aid, introduced about 100 years ago, was of course the typewriter, which has developed in the past 20 years, via the magnetic-card typewriter, into today's word processor.Another very important machine aid has been the dictating machine. We were using dictating machines in my first translating job twenty years ago and like many of us I used to use a tape recorder with foot control for doing freelance work in the evening. Given an accurate and fast typing service, dictating machines continue to be one of the most valuable aids for fast and accurate translation.Another machine aid which we use extensively in the Council is the photo-copier. Some 45 per cent of the pages leaving my Division and all the other language divisions at the Council, except the French Division, are existing texts which have been amended to some extent, such as the substantive text of a proposal for a Council Regulation, which has been discussed and amended in a working party. Many of our translations therefore consist of what we call 'cut and stick' work in which the translator himself takes a photocopy of the earlier document, cuts out the appropriate passages and amends them by hand, filling in between with new translation. 
An activity survey carried out in the Division some years ago showed that translators spent 3 per cent of their effective working time in obtaining documents from the archives, 1.8 per cent in getting photocopies and 7.8 per cent in 'cutting and sticking' documents.We also have a large number of standard texts such as letters accrediting ambassadors, letters to the President of the European Parliament, letters appointing members of committees, of which we have photocopies and simply insert names and dates etc. to produce the text which goes to the Typing Pool.It is obvious that all these types of work can be handled on word processors, so we might say that for our purposes the first use of the word processor will effectively be as a combination of the typewriter and the photocopy machine. I will return to this later.A further aid not to be despised is microfilm, or microfiche. I myself have no direct experience of using this in the Council, although one Division has access to the European Communities' Official Journal on microfiche. I also understand that the Translation Service at the Department of Industry and Trade in London takes the French and English versions of the Official Journal on microfiche and have been experimenting with using microfiche instead of hard copy. Their experience may be useful, in that they found that, when they were confronted with draft amendments in French to European Community Regulations, they needed to look at the original French plus the original English, note the differences in the French and produce a new English version. Two microfiche readers were therefore set up side by side so that it was possible to compare texts.The intention, I understand, is now to use a reader-printer so that a translator can locate the relevant fiche, obtain a quick paper copy of the new pages he needs and then work at his desk. This will also have the effect of enabling 2 translators to work on the same job if urgency requires it.Returning to the Council Secretariat, the first use of electronic machine aids for translators has been in making terminology available to them. It is quite obvious that JULY/AUGUST 1981 USE IN LARGE ORGANIZATIONS with nearly fifty people producing translations into English of texts which keep coming back again and again-and, because of the pressure of deadlines, with no possibility of ensuring that documents on a given subject always go to the same person or group of people-it is absolutely essential that our terminology is placed on record as fast as new terms are met, and is made available to all linguists as soon as possible.My eyes were opened to these problems as soon as I joined the staff of the Council of the European Communities in 1962, as a translator on the first abortive negotiations for Britain's accession to the Communities. I continued my previous practice of noting the English equivalent of all the terms and expressions which caused any difficulty and this came in useful when I was subsequently appointed as reviser in charge of the small team of translators. 
In order to avoid two or more people wasting their time on finding their own answers to one and the same problem, I used to circulate lists of terms taken from my own notes and short typewritten text-related glossaries.When the negotiations collapsed early in 1963 I decided then and there that the computer was going to be the answer to the problem of attaining consistency of usage in any large-scale translation operation.There was now a need for English translations, even though the United Kingdom had not become a Member of the European Communities, but I was not in any position at the time to ask for a computer in order to put my principle of 'once is enough' into practice so I had to make do with file cards. These personal file cards, kept up through seven years of waiting until successful negotiations were started in 1970, became the raw material for the first edition of the French-English European Communities Glossary. All the subsequent editions of our glossary, including the current, seventh, edition, were produced by retyping the whole text each time, but with the seventh edition we entered the electronic age.Some 18 months ago the Council Secretariat finally took the step of setting up a Terminology Service, on rather unusual lines in that the terminologists were part-time volunteers who manned separate terminology bureaux in each of the language Divisions, but with a Central Secretariat which has been equipped from the beginning with word processing machines and staffed by multi-lingual secretaries capable, between them, of typing quickly and accurately in all the Community languages.The seventh edition of our French-English Glossary was the first job to be done on the word processor, an IBM machine with an ink-jet printer. The fact that all the 1,000 pages were on floppy discs greatly simplified correcting the mistakes discovered in reading the proofs and the secretaries also found the word processor physically easier to operate than the electric typewriters they had been using previously. Since we had regarded the whole operation as experimental, however, we changed some nine months ago to the Siemens equipment which we are now using. We have produced one supplement to the Glossary on the Siemens equipment, and are about to produce a second, cumulative, supplement, for which purpose it will only be necessary to type in the new terms. As these new terms are inserted in their correct alphabetical position, all the terms beyond move down, and the system re-paginates the supplement automatically.We do have a problem, in that the Siemens equipment cannot read the complete Glossary which was recorded on IBM discs, so we need to get a conversion programme VOL. 33, NO. 7/8 set up in order to enable us to produce the next edition of the complete Glossary by slotting the final cumulative supplement into the seventh edition, without retyping it.At present, then, we are using our word-processors to produce a traditional printed glossary, but we designed the layout of the glossary pages so they could easily be consulted on a visual display unit. Since the current equipment only operates with one floppy disc at a time, and we have at least one for each letter of the alphabet, it is not possible to interrogate the word processor for terms which are not on the disc which happens to be in the machine. 
Also, when one keys in a query the required term only comes up on the screen very slowly, as the equipment has to read each page, starting from the beginning of the disc.However, when more sophisticated word processors become available, with a much greater memory capacity, we hope to be able to expand our present bilingual system into a multi-lingual terminology system which can be consulted on word processing terminals placed in each translators' office.The first principle which we have adopted in our terminology operation in the Council Secretariat is to keep the actual terminology searching and recording inside the various language Divisions, and to have our terminologists continue to translate or revise for part of the time.The second principle is that each Division prepares its own bilingual files of translations from the language or languages which are important for it. For example, all Divisions except the French Division are concentrating at first on building up files of terms found in their own language when translating from French. These files will be printed as separate versions of the Council's European Communities Glossary in due course. The French Division has already produced an English-French Glossary which is now being printed, but which will not be available for sale at this stage, and our Terminology Service has distributed within the Community institutions, also under the European Communities Glossary title, a French-German Glossary produced by the Head of the German Translation Division at the Economic and Social Committee. It is interesting that this has almost the same layout as our own glossary.In producing the bilingual card files in our separate Divisions, on which our glossaries are based, we exchange cards with other Divisions. At first we did this by means of special multiple cards which gave a messy carbon copy, but now we have managed to programme the Siemens word-processor to print cards in any combination of two languages, with either language at the top.As an exception to our general approach of working with two languages at a time, our French Division are now scanning Community documents in French, German and English and producing lists of terms in three languages. These are being typed onto a six-language mask on the word processor, and when these terms have been typed once, we can produce bilingual cards in any combination of the three languages, and also bilingual glossaries, without further typing.As I have already hinted, we hope this way to build up a multilingual terminology system which can print out up-to-date bilingual Glossaries at the touch of a button and can also be consulted via the screen on the word processor in each translator's office.The production of a multi-lingual terminology system in this way, built up basically from bilingual terminology units, presupposes that there is an exact match of meaning in the various languages. We all know that this is very often not the case in our dayto-day linguistic experience. Terms in two different languages which do have the same meaning in one context very often have other areas or shades of meaning which do not coincide. However, within the European Communities, and certainly in legislative and legal texts within the Communities, there must of necessity be exact equivalence for a given concept across all the languages.When this realization is combined with the situation which we have in the Council Secretariat, i.e. 
that at least 95 per cent of the texts in the various languages originate from a common language-French-we do have the possibility of automatically producing a multi-lingual terminology system from separate discrete bilingual files all based on French as pivot language, provided three conditions are met when recording individual terminology units. These are:(1) The form of the French expression must be identical in all the bilingual units.Otherwise, a computer or word processor will not recognize the units as being equivalent. (2) The concept expressed by the French term must be exactly the same in all the language combinations. (3) The context of the concept must be identical for all the language combinations.For example there may be a concept which is identical in two contexts, but the actual terms used in any given language may not be identical. For example, the French term 'techniques d'abattage' is 'coal-getting techniques' in coal-mining, but 'stoping techniques' in metal-ore mining.The French-English version of our European Communities Glossary is, incidentally, on sale at Her Majesty's Stationery Office and some booksellers, price £7.60. The cumulative supplements are not put on the market, but are distributed only within the Community institutions and to Government Departments, University Language Courses, and European Community Depositary Libraries. When it becomes possible to interrogate data bases via the Prestel system in the United Kingdom we will consider making the Glossary available on this service.In addition to our own terminology system, the Council Secretariat also has a computer terminal in our central Terminology Secretariat which is permanently connected to the Commission's computerized terminology database, 'Eurodicautom'. This system is also multi-lingual; it was originally designed on rather different lines from our own Glossary, which means that it tended to overwhelm the user with superfluous information. The latest software, which is not yet available on the terminal in the Exhibition, does go a long way to giving the 'translator's package' of basic information, for which I have been pleading for some years, so perhaps the various systems are converging towards a basic common denominator of what the translator really needs.With the proliferation of word processors making it possible for anyone who has the necessary money to set up his own 'computerized' terminology data base, the dream of exchanging terms automatically between one term bank and another is fast becoming unachievable, unless someone can produce a standard layout and standard technology very quickly indeed.Although term banks were with us some time before sophisticated word processors became generally available, and had already become an absolutely indispensable factor in the operation of some large translation organizations, such as the Bundessprachenamt in West Germany, it is the advent of the word processor which is going to affect all translators radically in the very near future. In fact, if the necessary funds can be found, our next step in the Council Secretariat will be trials with a word processor in my own Division to see what advantages it can offer in producing the final typed texts of translations, and also to discover any disadvantages as compared with our current methods of working. I envisage setting up a small team of volunteer translators, revisers and secretaries, to experiment with various ways of using the equipment. 
At first, we will produce translations on the word processor in parallel with translating the same texts elsewhere in the Division, so that if anything goes wrong, translations are not held up. This is a vital consideration in attempting to introduce new equipment. As the bugs are ironed out, the new system can gradually replace the old methods, and be extended to cover new areas, if it does really prove to have advantages and to be cost-effective as defined in the particular organization's own terms.You may be surprised that I have got so far without mentioning machine translation or machine-assisted translation, as it is generally called nowadays. This is partly because the Council Secretariat will certainly never go in for developing its own machineassisted translation system, and partly because I am trying to proceed logically.The Commission of the European Communities has in fact done a good deal of work on machine translation under the first action plan for the transfer of information between languages which is sponsored by DG XIII, the Directorate General for the Information Market and Innovation, in Luxembourg, and is continuing its efforts under the second action plan.Some years ago the Commission bought the use of the American commercial machine-translation system 'Systran' and, together with its originators, did a considerable amount of work on developing its capacity in English-to-French, French-to-English and Italian-to-English translation.The results have not so far proved adequate for use in the Commission's own Translation Department, largely because too much post-editing (or revision) was required, but the Commission plans to offer a service of Systran translations on demand from databases on the Euronet network. There is also a growing interest in the possibility of using Systran for translating patent specifications.What did become evident during the Commission's development work was that any operational use of machine translation in Community translating operations would have to take place in the framework of a system employing word processors. So, even although it is not at present envisaged that machine translation can be employed in the Commission's own translation operations, DG XIII are going ahead with the installation of a Wang word processing system linked to the Siemens computer on which Systran is being run, in order to develop such a combined system.During the development of Systran, the Commission has also sponsored a remarkable breakthrough in machine translation, which was thought to be impossible. Margaret Masterman and Bob Smith of the Cambridge Language Research Unit have succeeded, under a contract given to them by the Commission, in producing a machine-translation programme which is capable of translating Systran's own machine-translation pro-is enough' principle. Now that we have reached the stage of recording the correct equivalents of individual terms, and making them available electronically, so as to achieve consistency of terminology, and now commercial pressures are causing manufacturers to offer us cheaper and cheaper word processors with bigger and bigger memories, why not go the whole hog and store all the translations we have ever done in the word processor's memory? 
It must in fact be possible to produce a programme which would enable the word processor to 'remember' whether any part of a new text typed into it had already been translated, and to fetch this part, together with the translation which had already been made, and display it on the screen or print it out, automatically.In the Council Secretariat, for example, all typewriters could be replaced by work stations with their own word processing capacity, but all connected to a central computer with a very large memory which would store all the texts produced in the Council Secretariat, in all the official languages. Any new text would be typed into a word processing station, and as it was being typed, the system would check this text against the earlier texts stored in its memory, and would locate any part of it which had already been stored in the memory, together with its translation into all the other official languages. The system would also need to locate existing passages which had been amended before being incorporated into the new document.In this way, the system would produce partial translations of new documents in all the official languages, which could be printed out and given to the various translators for completion. One advantage over machine translation proper would be that all the passages so retrieved would be grammatically correct. In effect, we should be operating an electronic 'cut and stick' process which would, according to my calculations, save at least 15 per cent of the time which translators now employ in effectively producing translations.When the translations were completed, the texts in all the languages would be typed into the system for printing by whatever means was being employed, and at the same time would be available in the central electronic archives to serve as a basis for the translation of subsequent texts.Once a text was in the system, it could also be transmitted electronically to word processors in the Member States' capitals, and printed there for local distribution, so as to gain a day in the distribution of documents and avoid the need to physically despatch so many tons of paper each year from Brussels.Looking even further, it would be possible to service Conferences held in towns away from Brussels by remote translation, originals and translations being rapidly transmitted to and fro via the telephone network, or other data-transmission networks now being developed.With this development, we shall have come full circle again to the 'small user', because each of the individual translators, revisers or post-editors working on such an integrated network in a large organization will be in exactly the same position as a 'small user'-a lone freelance, or translators in a small commercial or government translation department-who could communicate with other small users and with large organizations, over the public data-transmission network.All that is required is that each individual translator, either working on his own, or in an organization whether large or small, has a word processor terminal with access JULY/AUGUST 1981 USE IN LARGE ORGANIZATIONS to a large enough memory to store all the translations he does, and connected to all other compatible translating terminals by the public data network. Anyone on the network will be able to telephone anyone else and his own word processor will then automatically check whether the text he has been asked to translate already exists in the second word processor's memory. 
If it does, it can be transmitted to the first word processor almost instantaneously and printed out at once, or used as the basis for further word processing operations. It would also be possible for one word processor to obtain terminology from another word processor's memory in the same way.It would of course be necessary to set up a system of charges for information supplied in this way, but this should present no problem in this age of electronic accounting. Payments could quite simply be charged to your credit-card account!To turn this dream into reality, a lot of hard work remains to be done, and it should be done just as quickly as possible if we are to get the manufacturers to understand the problems involved, and to market a 'translator's word processor' which will be as ubiquitous and as compatible as the telephone.Perhaps there is an opportunity here for the European translating profession to work urgently with manufacturers in order to produce the specifications for a universal text-communicating system, and to place it on the market. | null | null | null | null | Main paper:
:
The inference of the title of my paper seems to be that large organizations have unlimited money to throw around, and can therefore afford to install unlimited machine aids for their translators, perhaps even going as far as replacing them by a high-quality fully automatic machine translation system; and certainly going beyond what a 'small user' can permit himself.

The reality is quite different, at least as far as the European Community institutions are concerned, since the harsh winds of the economic recession are now blowing across Europe, and the national Treasuries are sending their axe-men to Brussels to cut the Communities' budget to the bone. For example, at the Council Secretariat we discovered with some consternation recently that our 1981 draft budget for word-processing equipment will not allow us to continue renting the limited amount of it which we already have, just at the moment when there are signs of a dawning acceptance of what word processing can do for us.

Large organizations in which the battle-cry is going to be that of saving money are, in the nature of things, going to have very little scope for introducing machine aids for their translators unless someone or some group in the organization stands up and fights for them.

The real progress now being made in introducing machine aids for translators is in areas where the object is to make money, not to save it, i.e. in large go-ahead commercial translation agencies, in two or three of the big computer companies, and in operations such as the Systran and Weidner machine(-assisted) translation systems which are being aggressively marketed.

However that may be, I propose to follow the precedent which earlier speakers have set, of talking about a particular operation of which they have practical experience, rather than addressing themselves to a wider, more theoretical, attack on their subject.

Accordingly, I want to describe the scope for the use of machine aids in the Secretariat of the Council of the European Communities, as I see it. As I go along, I shall be mentioning some aspects of the various types of machine aids which are available or under development. I shall also refer to the use already being made of machine aids in other organizations, and to the difficulties people have encountered in introducing them and using them.

The first step in deciding what can be done about a given situation is to discover what the situation is, so I will start by describing the way in which the Council Secretariat operates, and where the Translation Department fits in.

The Secretariat exists principally to service meetings of the Council of the European Communities, the Permanent Representatives Committee, and all the many working parties involved in preparing the proposals for Community legislation which are put to the Council in the form of Regulations, Decisions and Directives. These proposals all originate in the Commission, which sends them to the Council in all six Community languages (seven after 1 January 1981, when Greece accedes to the Communities).
Very urgent proposals may go straight to a meeting of the Council, and may even be translated in the Council Secretariat, but the general principle is that a proposal does not even start its journey through the Council's working parties until it has been received from the Commission in all the official languages.

Such non-urgent proposals start their progress through the Council Secretariat by going to a working party of national experts who subject them to minute scrutiny, not simply to protect national interests, but in a genuine effort to discover any difficulties there could be in applying the legislation, and to produce legal texts which will hold water, and can be effectively applied in all the Member States, with their widely differing legal systems.

When most of the problems have been resolved, a proposal goes to the Permanent Representatives Committee, consisting of the Member States' Permanent Representatives in Brussels (they rank as Ambassadors), who meet each week and who iron out as many of the remaining difficulties as they can before sending the proposals to the Council, either for approval on the nod, or for political discussion.

Once agreed by the Council, in principle, the texts in the various official languages are vetted by a 'Jurist/Linguist's Working Party' whose job it is to ensure complete concordance between the texts in the various languages before they are published in the separate language editions of the Official Journal. It is worth noting here that there are not separate national editions: the French edition, for example, is valid in France, Belgium and Luxembourg, the Dutch edition in Belgium and the Netherlands, and the English edition in Ireland as well as in the United Kingdom.

It has always been the practice in the Council Secretariat for the most important working language to be French. Consequently, as a proposal moves through the working parties and then the Permanent Representatives Committee it is repeatedly amended, and the administrators who act as secretaries for all the meetings produce an amended text of the proposal after each meeting, together with the minutes of the meeting, both documents normally being drafted in French. These French texts are then translated in the Translation Department into all the other official languages and distributed to the national officials for their use at the next meeting or, at the last stage, are submitted to the Council for formal adoption and publication in the Official Journal.

It will be obvious from this brief outline of how the Secretariat operates that there would be tremendous advantages in using word-processors for typing the repeatedly-amended French texts of proposals for legislation, quite apart from their possible use in the Translation Department. As a matter of fact, one Directorate in the Secretariat is now using a word processor for this purpose, with encouraging results so far.

Having sketched the background to our work, we can now look more closely at how the Translation Department operates. This Department now consists of seven Divisions, since we have recently welcomed our first Greek colleagues who form the nucleus of the Greek Division which will be required to translate Council texts into Greek as from 1 January next year. The other language Divisions are French, German, Dutch, Italian, Danish and English, the latter being my own Division. For the record, we also have a capacity to translate documents into Irish, but this is a limited operation.
French being the language in which most Secretariat documents originate, the French Division's work is quite different from that of the other Divisions, consisting largely of translating documents received in their own language from the various national Permanent Representations.

What I am going to say now, therefore, applies to the German, Dutch, Danish, Italian and Greek Divisions, in the same way as to the English Division, since we are all basically translating in parallel from French originals. We do translate texts from other languages now and again, but the proportion is so small that it hardly affects the argument which I am going to develop. For example, in the first three months of this year texts translated from German into English amounted to 1.6 per cent of the English Division's output, from Italian 1.4 per cent, from Dutch 0.4 per cent and from Danish 0.2 per cent.

The typical Division, then, consists of some 45 to 50 linguists, of whom about one-third are revisers and two-thirds are translators, with two or three archivists and two or three secretaries who book work into the Division, distribute it to the translators and revisers, and see that the finished translations are sent on to the Typing Pool by the stated deadlines.

Present arrangements are that translators can type their own work, can use dictating machines, or can call on typists to whom they dictate their translations on the typewriter in their office. In principle, all translations are revised by a reviser before being sent to the Typing Pool, who are entirely responsible for the accuracy and presentation of their typing. Some documents are typed on stencils, for reproduction on duplicating machines, and some on plain white paper to produce originals for offset printing.

Now, where do machine aids fit into this picture?

Of course, when we talk about machine aids the implicit assumption is that we mean computers or word processors (and the boundary between these is getting hazier every day), but the first machine aid, introduced about 100 years ago, was of course the typewriter, which has developed in the past 20 years, via the magnetic-card typewriter, into today's word processor.

Another very important machine aid has been the dictating machine. We were using dictating machines in my first translating job twenty years ago, and like many of us I used to use a tape recorder with foot control for doing freelance work in the evening. Given an accurate and fast typing service, dictating machines continue to be one of the most valuable aids for fast and accurate translation.

Another machine aid which we use extensively in the Council is the photo-copier. Some 45 per cent of the pages leaving my Division and all the other language divisions at the Council, except the French Division, are existing texts which have been amended to some extent, such as the substantive text of a proposal for a Council Regulation, which has been discussed and amended in a working party. Many of our translations therefore consist of what we call 'cut and stick' work, in which the translator himself takes a photocopy of the earlier document, cuts out the appropriate passages and amends them by hand, filling in between with new translation.
An activity survey carried out in the Division some years ago showed that translators spent 3 per cent of their effective working time in obtaining documents from the archives, 1.8 per cent in getting photocopies and 7.8 per cent in 'cutting and sticking' documents.

We also have a large number of standard texts, such as letters accrediting ambassadors, letters to the President of the European Parliament, and letters appointing members of committees, of which we have photocopies and simply insert names and dates etc. to produce the text which goes to the Typing Pool.

It is obvious that all these types of work can be handled on word processors, so we might say that for our purposes the first use of the word processor will effectively be as a combination of the typewriter and the photocopy machine. I will return to this later.

A further aid not to be despised is microfilm, or microfiche. I myself have no direct experience of using this in the Council, although one Division has access to the European Communities' Official Journal on microfiche. I also understand that the Translation Service at the Department of Industry and Trade in London takes the French and English versions of the Official Journal on microfiche and has been experimenting with using microfiche instead of hard copy. Their experience may be useful, in that they found that, when they were confronted with draft amendments in French to European Community Regulations, they needed to look at the original French plus the original English, note the differences in the French and produce a new English version. Two microfiche readers were therefore set up side by side so that it was possible to compare texts.

The intention, I understand, is now to use a reader-printer so that a translator can locate the relevant fiche, obtain a quick paper copy of the new pages he needs and then work at his desk. This will also have the effect of enabling two translators to work on the same job if urgency requires it.

Returning to the Council Secretariat, the first use of electronic machine aids for translators has been in making terminology available to them. It is quite obvious that with nearly fifty people producing translations into English of texts which keep coming back again and again (and, because of the pressure of deadlines, with no possibility of ensuring that documents on a given subject always go to the same person or group of people), it is absolutely essential that our terminology is placed on record as fast as new terms are met, and is made available to all linguists as soon as possible.

My eyes were opened to these problems as soon as I joined the staff of the Council of the European Communities in 1962, as a translator on the first abortive negotiations for Britain's accession to the Communities. I continued my previous practice of noting the English equivalent of all the terms and expressions which caused any difficulty, and this came in useful when I was subsequently appointed as reviser in charge of the small team of translators.
In order to avoid two or more people wasting their time on finding their own answers to one and the same problem, I used to circulate lists of terms taken from my own notes and short typewritten text-related glossaries.

When the negotiations collapsed early in 1963 I decided then and there that the computer was going to be the answer to the problem of attaining consistency of usage in any large-scale translation operation.

There was now a need for English translations, even though the United Kingdom had not become a Member of the European Communities, but I was not in any position at the time to ask for a computer in order to put my principle of 'once is enough' into practice, so I had to make do with file cards. These personal file cards, kept up through seven years of waiting until successful negotiations were started in 1970, became the raw material for the first edition of the French-English European Communities Glossary. All the subsequent editions of our glossary, including the current, seventh, edition, were produced by retyping the whole text each time, but with the seventh edition we entered the electronic age.

Some 18 months ago the Council Secretariat finally took the step of setting up a Terminology Service, on rather unusual lines in that the terminologists were part-time volunteers who manned separate terminology bureaux in each of the language Divisions, but with a Central Secretariat which has been equipped from the beginning with word processing machines and staffed by multi-lingual secretaries capable, between them, of typing quickly and accurately in all the Community languages.

The seventh edition of our French-English Glossary was the first job to be done on the word processor, an IBM machine with an ink-jet printer. The fact that all the 1,000 pages were on floppy discs greatly simplified correcting the mistakes discovered in reading the proofs, and the secretaries also found the word processor physically easier to operate than the electric typewriters they had been using previously. Since we had regarded the whole operation as experimental, however, we changed some nine months ago to the Siemens equipment which we are now using. We have produced one supplement to the Glossary on the Siemens equipment, and are about to produce a second, cumulative, supplement, for which purpose it will only be necessary to type in the new terms. As these new terms are inserted in their correct alphabetical position, all the terms beyond move down, and the system re-paginates the supplement automatically.

We do have a problem, in that the Siemens equipment cannot read the complete Glossary which was recorded on IBM discs, so we need to get a conversion programme set up in order to enable us to produce the next edition of the complete Glossary by slotting the final cumulative supplement into the seventh edition, without retyping it.

At present, then, we are using our word-processors to produce a traditional printed glossary, but we designed the layout of the glossary pages so they could easily be consulted on a visual display unit. Since the current equipment only operates with one floppy disc at a time, and we have at least one for each letter of the alphabet, it is not possible to interrogate the word processor for terms which are not on the disc which happens to be in the machine.
Also, when one keys in a query the required term only comes up on the screen very slowly, as the equipment has to read each page, starting from the beginning of the disc.

However, when more sophisticated word processors become available, with a much greater memory capacity, we hope to be able to expand our present bilingual system into a multi-lingual terminology system which can be consulted on word processing terminals placed in each translator's office.

The first principle which we have adopted in our terminology operation in the Council Secretariat is to keep the actual terminology searching and recording inside the various language Divisions, and to have our terminologists continue to translate or revise for part of the time.

The second principle is that each Division prepares its own bilingual files of translations from the language or languages which are important for it. For example, all Divisions except the French Division are concentrating at first on building up files of terms found in their own language when translating from French. These files will be printed as separate versions of the Council's European Communities Glossary in due course. The French Division has already produced an English-French Glossary which is now being printed, but which will not be available for sale at this stage, and our Terminology Service has distributed within the Community institutions, also under the European Communities Glossary title, a French-German Glossary produced by the Head of the German Translation Division at the Economic and Social Committee. It is interesting that this has almost the same layout as our own glossary.

In producing the bilingual card files in our separate Divisions, on which our glossaries are based, we exchange cards with other Divisions. At first we did this by means of special multiple cards which gave a messy carbon copy, but now we have managed to programme the Siemens word-processor to print cards in any combination of two languages, with either language at the top.

As an exception to our general approach of working with two languages at a time, our French Division are now scanning Community documents in French, German and English and producing lists of terms in three languages. These are being typed onto a six-language mask on the word processor, and when these terms have been typed once, we can produce bilingual cards in any combination of the three languages, and also bilingual glossaries, without further typing.

As I have already hinted, we hope in this way to build up a multilingual terminology system which can print out up-to-date bilingual Glossaries at the touch of a button and can also be consulted via the screen on the word processor in each translator's office.

The production of a multi-lingual terminology system in this way, built up basically from bilingual terminology units, presupposes that there is an exact match of meaning in the various languages. We all know that this is very often not the case in our day-to-day linguistic experience. Terms in two different languages which do have the same meaning in one context very often have other areas or shades of meaning which do not coincide. However, within the European Communities, and certainly in legislative and legal texts within the Communities, there must of necessity be exact equivalence for a given concept across all the languages.

When this realization is combined with the situation which we have in the Council Secretariat, i.e.
that at least 95 per cent of the texts in the various languages originate from a common language (French), we do have the possibility of automatically producing a multi-lingual terminology system from separate discrete bilingual files all based on French as pivot language, provided three conditions are met when recording individual terminology units. These are:

(1) The form of the French expression must be identical in all the bilingual units. Otherwise, a computer or word processor will not recognize the units as being equivalent.

(2) The concept expressed by the French term must be exactly the same in all the language combinations.

(3) The context of the concept must be identical for all the language combinations. For example, there may be a concept which is identical in two contexts, but the actual terms used in any given language may not be identical: the French term 'techniques d'abattage' is 'coal-getting techniques' in coal-mining, but 'stoping techniques' in metal-ore mining.

The French-English version of our European Communities Glossary is, incidentally, on sale at Her Majesty's Stationery Office and some booksellers, price £7.60. The cumulative supplements are not put on the market, but are distributed only within the Community institutions and to Government Departments, University Language Courses, and European Community Depositary Libraries. When it becomes possible to interrogate data bases via the Prestel system in the United Kingdom we will consider making the Glossary available on this service.

In addition to our own terminology system, the Council Secretariat also has a computer terminal in our central Terminology Secretariat which is permanently connected to the Commission's computerized terminology database, 'Eurodicautom'. This system is also multi-lingual; it was originally designed on rather different lines from our own Glossary, which means that it tended to overwhelm the user with superfluous information. The latest software, which is not yet available on the terminal in the Exhibition, does go a long way to giving the 'translator's package' of basic information, for which I have been pleading for some years, so perhaps the various systems are converging towards a basic common denominator of what the translator really needs.

With the proliferation of word processors making it possible for anyone who has the necessary money to set up his own 'computerized' terminology data base, the dream of exchanging terms automatically between one term bank and another is fast becoming unachievable, unless someone can produce a standard layout and standard technology very quickly indeed.

Although term banks were with us some time before sophisticated word processors became generally available, and had already become an absolutely indispensable factor in the operation of some large translation organizations, such as the Bundessprachenamt in West Germany, it is the advent of the word processor which is going to affect all translators radically in the very near future. In fact, if the necessary funds can be found, our next step in the Council Secretariat will be trials with a word processor in my own Division to see what advantages it can offer in producing the final typed texts of translations, and also to discover any disadvantages as compared with our current methods of working. I envisage setting up a small team of volunteer translators, revisers and secretaries, to experiment with various ways of using the equipment.
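Before going further, it may help to make the pivot-language merge described earlier concrete. If the French side of every bilingual record really is identical in form, concept and context, as the three conditions require, then separate bilingual files can be joined on the French term and bilingual cards generated for any pair of the other languages without retyping. The following is only a minimal sketch of that join, not the Council's actual system; the file layout and the sample entries are assumptions made for illustration.

```python
# Sketch of merging discrete bilingual files on French as pivot language.
# Each file maps a French term (identical in form, concept and context,
# per the three conditions above) to its equivalent in one other language.
# The entries below are invented examples for illustration only.

fr_en = {"comité des représentants permanents": "Permanent Representatives Committee"}
fr_de = {"comité des représentants permanents": "Ausschuss der Ständigen Vertreter"}
fr_da = {"comité des représentants permanents": "De Faste Repræsentanters Komité"}

bilingual_files = {"EN": fr_en, "DE": fr_de, "DA": fr_da}


def merge_on_pivot(files):
    """Join the bilingual files on their (identical) French term."""
    multilingual = {}
    for lang, file in files.items():
        for fr_term, target_term in file.items():
            multilingual.setdefault(fr_term, {"FR": fr_term})[lang] = target_term
    return multilingual


def bilingual_cards(multilingual, lang_a, lang_b):
    """Print a card for every term recorded in both languages, lang_a on top."""
    for entry in multilingual.values():
        if lang_a in entry and lang_b in entry:
            print(f"{entry[lang_a]}\n  = {entry[lang_b]}\n")


terms = merge_on_pivot(bilingual_files)
bilingual_cards(terms, "DE", "DA")   # any combination, without further typing
```

Producing a printed glossary for a given language pair would then simply be a matter of sorting the same merged entries alphabetically on whichever language is wanted on top.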
At first, we will produce translations on the word processor in parallel with translating the same texts elsewhere in the Division, so that if anything goes wrong, translations are not held up. This is a vital consideration in attempting to introduce new equipment. As the bugs are ironed out, the new system can gradually replace the old methods, and be extended to cover new areas, if it does really prove to have advantages and to be cost-effective as defined in the particular organization's own terms.

You may be surprised that I have got so far without mentioning machine translation or machine-assisted translation, as it is generally called nowadays. This is partly because the Council Secretariat will certainly never go in for developing its own machine-assisted translation system, and partly because I am trying to proceed logically.

The Commission of the European Communities has in fact done a good deal of work on machine translation under the first action plan for the transfer of information between languages, which is sponsored by DG XIII, the Directorate General for the Information Market and Innovation, in Luxembourg, and is continuing its efforts under the second action plan.

Some years ago the Commission bought the use of the American commercial machine-translation system 'Systran' and, together with its originators, did a considerable amount of work on developing its capacity in English-to-French, French-to-English and Italian-to-English translation.

The results have not so far proved adequate for use in the Commission's own Translation Department, largely because too much post-editing (or revision) was required, but the Commission plans to offer a service of Systran translations on demand from databases on the Euronet network. There is also a growing interest in the possibility of using Systran for translating patent specifications.

What did become evident during the Commission's development work was that any operational use of machine translation in Community translating operations would have to take place in the framework of a system employing word processors. So, even though it is not at present envisaged that machine translation can be employed in the Commission's own translation operations, DG XIII are going ahead with the installation of a Wang word processing system linked to the Siemens computer on which Systran is being run, in order to develop such a combined system.

During the development of Systran, the Commission has also sponsored a remarkable breakthrough in machine translation, which was thought to be impossible. Margaret Masterman and Bob Smith of the Cambridge Language Research Unit have succeeded, under a contract given to them by the Commission, in producing a machine-translation programme which is capable of translating Systran's own machine-translation programme.

This brings me back to my 'once is enough' principle. Now that we have reached the stage of recording the correct equivalents of individual terms, and making them available electronically, so as to achieve consistency of terminology, and now that commercial pressures are causing manufacturers to offer us cheaper and cheaper word processors with bigger and bigger memories, why not go the whole hog and store all the translations we have ever done in the word processor's memory?
It must in fact be possible to produce a programme which would enable the word processor to 'remember' whether any part of a new text typed into it had already been translated, and to fetch this part, together with the translation which had already been made, and display it on the screen or print it out, automatically.In the Council Secretariat, for example, all typewriters could be replaced by work stations with their own word processing capacity, but all connected to a central computer with a very large memory which would store all the texts produced in the Council Secretariat, in all the official languages. Any new text would be typed into a word processing station, and as it was being typed, the system would check this text against the earlier texts stored in its memory, and would locate any part of it which had already been stored in the memory, together with its translation into all the other official languages. The system would also need to locate existing passages which had been amended before being incorporated into the new document.In this way, the system would produce partial translations of new documents in all the official languages, which could be printed out and given to the various translators for completion. One advantage over machine translation proper would be that all the passages so retrieved would be grammatically correct. In effect, we should be operating an electronic 'cut and stick' process which would, according to my calculations, save at least 15 per cent of the time which translators now employ in effectively producing translations.When the translations were completed, the texts in all the languages would be typed into the system for printing by whatever means was being employed, and at the same time would be available in the central electronic archives to serve as a basis for the translation of subsequent texts.Once a text was in the system, it could also be transmitted electronically to word processors in the Member States' capitals, and printed there for local distribution, so as to gain a day in the distribution of documents and avoid the need to physically despatch so many tons of paper each year from Brussels.Looking even further, it would be possible to service Conferences held in towns away from Brussels by remote translation, originals and translations being rapidly transmitted to and fro via the telephone network, or other data-transmission networks now being developed.With this development, we shall have come full circle again to the 'small user', because each of the individual translators, revisers or post-editors working on such an integrated network in a large organization will be in exactly the same position as a 'small user'-a lone freelance, or translators in a small commercial or government translation department-who could communicate with other small users and with large organizations, over the public data-transmission network.All that is required is that each individual translator, either working on his own, or in an organization whether large or small, has a word processor terminal with access JULY/AUGUST 1981 USE IN LARGE ORGANIZATIONS to a large enough memory to store all the translations he does, and connected to all other compatible translating terminals by the public data network. Anyone on the network will be able to telephone anyone else and his own word processor will then automatically check whether the text he has been asked to translate already exists in the second word processor's memory. 
If it does, it can be transmitted to the first word processor almost instantaneously and printed out at once, or used as the basis for further word processing operations. It would also be possible for one word processor to obtain terminology from another word processor's memory in the same way.It would of course be necessary to set up a system of charges for information supplied in this way, but this should present no problem in this age of electronic accounting. Payments could quite simply be charged to your credit-card account!To turn this dream into reality, a lot of hard work remains to be done, and it should be done just as quickly as possible if we are to get the manufacturers to understand the problems involved, and to market a 'translator's word processor' which will be as ubiquitous and as compatible as the telephone.Perhaps there is an opportunity here for the European translating profession to work urgently with manufacturers in order to produce the specifications for a universal text-communicating system, and to place it on the market.
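Returning to the 'remembering' word processor described above, here is a minimal sketch under simplified assumptions: the archive contents, the crude sentence splitting and the exact-match criterion are placeholders rather than a description of any existing system. New text is cut into sentences, and any sentence already in the archive comes back with its stored translation, leaving only the remainder for the translator.

```python
import re

# Hypothetical archive of previously translated sentences (source -> target).
archive = {
    "The Council adopted the regulation.": "Le Conseil a adopté le règlement.",
    "The annex is amended as follows.": "L'annexe est modifiée comme suit.",
}

def split_sentences(text):
    """Crude sentence splitter; a real system would need something far better."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def pretranslate(text, memory):
    """Return (sentence, stored translation or None) for each sentence of a new text."""
    return [(s, memory.get(s)) for s in split_sentences(text)]

if __name__ == "__main__":
    new_document = ("The Council adopted the regulation. "
                    "A new committee shall be set up.")
    for sentence, hit in pretranslate(new_document, archive):
        print("FOUND " if hit else "TO DO ", sentence, "->", hit or "")
```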
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 531 | 0.020716 | null | null | null | null | null | null | null | null |
d90aa2d883d3150f3ad9eac67052fdad4b068336 | 60736592 | null | Translating and online | Information can be retrieved by direct interrogation of a remote computer by means of a keyboard terminal and a telephone. The advantages of such an online system are fast access to large quantities of data and the opportunity to refine the enquiry by conversing with the computer. At present, data bases can be used to determine if a translation of a document, or an alternative, already exists. They can assist with translating particular words or phrases, especially in new subject areas. In the future, online systems may be exploited to produce more sophisticated aids, reflecting the structure of language. THE ENGLISH WORD is 'online' and its French equivalent 'conversationnel'. Although the main topic of this paper is how information transfer via online searching can enhance translation, I cannot resist the opportunity to start with this example of the reverse case-how translation may enhance information transfer. 'Online' merely states that you are in direct contact with a computer; 'conversationnel' implies that two-way communication is possible between you and the computer. The latter is nearer the truth. Online searching can be defined as the process of interactively searching for and retrieving information by computer from a machine-readable database. 1 Why should interaction be a desirable feature of computer use? Our chairman remarked during the 1978 seminar on 'Translating and the Computer' that 'if translators are to coexist with computers, we must become actively involved in directing their uses, let us be the masters and they the tools'. 2 Using the computer in an online mode helps us to achieve this control. It is well known that computers will only do what they are told. If an enquiry has to be formulated in a very complex language before it can be put to the computer, and this formulation involves computer scientists and a long delay, then we, non-computer scientists, do not have control of computers. Online searching allows us to put the questions in simple language. If you happen to phrase the question in the wrong way or if the computer does not have the information you require it will tell you so, immediately and, more or less, politely. You are then in a position to correct the question. Online searching is understandable and therefore controllable. In this paper I shall be asking the following questions: what is online? why is it useful? how does it help translators? where can you obtain access from? and how much does it cost? | {
"name": [
"Duckitt, Pauline"
],
"affiliation": [
null
]
} | null | null | Translating and the Computer: Machine aids for translators | 1980-11-01 | 6 | 2 | null | THE ENGLISH WORD is 'online' and its French equivalent 'conversationnel'. Although the main topic of this paper is how information transfer via online searching can enhance translation, I cannot resist the opportunity to start with this example of the reverse case-how translation may enhance information transfer. 'Online' merely states that you are in direct contact with a computer; 'conversationnel' implies that two-way communication is possible between you and the computer. The latter is nearer the truth. Online searching can be defined as the process of interactively searching for and retrieving information by computer from a machine-readable database. 1 Why should interaction be a desirable feature of computer use? Our chairman remarked during the 1978 seminar on 'Translating and the Computer' that 'if translators are to coexist with computers, we must become actively involved in directing their uses, let us be the masters and they the tools'. 2 Using the computer in an online mode helps us to achieve this control. It is well known that computers will only do what they are told. If an enquiry has to be formulated in a very complex language before it can be put to the computer, and this formulation involves computer scientists and a long delay, then we, non-computer scientists, do not have control of computers. Online searching allows us to put the questions in simple language. If you happen to phrase the question in the wrong way or if the computer does not have the information you require it will tell you so, immediately and, more or less, politely. You are then in a position to correct the question. Online searching is understandable and therefore controllable.In this paper I shall be asking the following questions: what is online? why is it useful? how does it help translators? where can you obtain access from? and how much does it cost?For the benefit of those who are not familiar with the concepts underlying online information retrieval I will give a brief explanation and hope that those to whom this may appear simplistic will bear with me.The user of the online system constructs an enquiry to be matched against the collection of information in a database. Some people have drawn the distinction between databases, containing bibliographic references, and data banks, containing non-bibliographic information. For our purposes today, I will not bother about this distinction and will refer solely to databases, meaning by this structured collections of any kind of information in machine readable form. Depending on the response to the enquiry, the user reformats his enquiry or modifies its scope until he is satisfied with the information retrieved.Communication with the database takes place via a keyboard terminal with a screen and/or a printer to display the interaction and the results. The data bases exist on computer discs made available by search system suppliers. Typically, each search system supplier will have a computer and will offer access to a number of data bases. The interposition of a telecommunications network between the user's terminal and the computers has revolutionized the accessibility of the databases. Users make what is usually a local telephone call and link their terminals to a network, which allows them to connect to computers anywhere in the world. 
One network, the International Packet Switching Service (IPSS) links UK users with computers in the USA (and viceversa) and another, EURONET, links users and computers throughout the countries of the European Community.Among the chief advantages of online is the ability to modify the enquiry as the search progresses. Another advantage is the large amount of information that can be stored on computer discs. More access points to the information are possible when it is in this form rather than when it is subject to the limiting factors of the printed form, physical volume and type-setting costs. Thus in a printed book, access to the contents is made via the index; with an online equivalent, every word in the book could be searched. Retrieval of information online is also fast, a matter of minutes rather than hours or days. The information can be kept up to date easily because of the possibility of merging new items into the existing database.The crucial question from your viewpoint is how online systems may help translators. Professor Sager in 'Translating and the Computer' outlined the stages of decisions and actions involved in the production of a translation. 3 Online can be used in the first two of these: deciding whether to translate or request a translation, and preparing the rough translation. The data bases which can be exploited for these purposes are diverse in nature. They include the currently small but ever growing number designed specifically for people concerned with translations and the hundreds primarily aimed at information retrieval specialists in specific subject areas. The relevance of the former group will be obvious but the usefulness of the latter group resides in the extent to which data base producers gather and make searchable multilingual information.The first two stages of translation referred to above cover five different areas in which online data bases can aid translation.(1) Deciding whether a translation is required During the production of many data bases, material in many source languages is abstracted by linguists into the target language of the data base, usually but not always English. If the candidate document for translation is published, it is worth checking to see if an abstract exists in one of the online data bases. If so, the information content of the abstract might render a full translation unnecessary. Of course, there is nothing new in this approach. However, whereas previously searching for an abstract might have taken several hours and might therefore have been deemed not worthwhile, an online search can in a matter of minutes determine whether or not an abstract exists. Millions of references can be scanned for the combination of authors' names, title words and journal title which identify a particular paper. Because publicly-available data bases usually cover only published material and because, it must be admitted, the information content of abstracts varies between data bases, the usefulness of this approach is limited yet should not be overlooked.The same databases may also be used in finding whether a translation already exists, if it is likely to be in one of the cover-to-cover translated journals that exist in certain subject areas.More relevant, however, may be the World Transindex (WTI) which has recently been made publicly available online as file 33 of the Information Retrieval Service (IRS) in Frascati, Italy. This is accessed from the UK via the Dialtech Service of the Department of Industry. 
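By way of illustration of the matching described under (1), the sketch below checks a candidate document against a small invented file of bibliographic records on author surname, title words and journal title; it is not the search language of any particular host, and the records and field names are assumptions made for the example.

```python
# Toy bibliographic file; real online systems index millions of such records.
records = [
    {"authors": ["Ivanov"], "title": "Solar energy conversion in thin films",
     "journal": "Geliotekhnika", "abstract_language": "English"},
]

def find_reference(surname, title_words, journal, file=records):
    """Return records whose author, journal and every title word all match."""
    surname, journal = surname.lower(), journal.lower()
    wanted = [w.lower() for w in title_words]
    hits = []
    for rec in file:
        title = rec["title"].lower()
        if (any(surname == a.lower() for a in rec["authors"])
                and journal == rec["journal"].lower()
                and all(w in title for w in wanted)):
            hits.append(rec)
    return hits

if __name__ == "__main__":
    for rec in find_reference("Ivanov", ["solar", "energy"], "Geliotekhnika"):
        print("Abstract already available in:", rec["abstract_language"])
```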
WTI holds details of translations collected since 1978 by the International Translation Centre in the Netherlands and the French Centre National de la Recherche Scientifique (CNRS). These include translations of scientific and technical literature from East European and Asiatic languages into Western languages and also translations of other Western languages into French. Because the information is online it is easy to provide a large number of access points to the information such as type of documents, target and source languages, publication year, title and subject index terms. Figure 1 shows a typical record. An example of a search would be to find details of translations of articles published in 1980 on solar energy from Russian into French. At present this is a fairly small database but it illustrates the possibilities inherent in online systems for sharing the collections of specialist centres and making them available over a wide area.(3) Gathering information in the subject area of a translation Information gathering may form one of the preliminaries of a translation if the translator is working in an unfamiliar area or if the subject of the translation is particularly recondite. An abstract of the paper, if it exists, may provide a valuable starting point, even if it is not considered to be full enough to act as a document surrogate. Interrogation of the same subject specific-data bases may reveal a review that will provide the necessary background.(4) Terminology By providing access to remote terminology banks, online makes its potentially greatest contribution to aiding translation. These data bases are like computerised dictionaries in that they provide equivalents of terms in a number of languages. However, not being limited by having to be produced in printed form, they can also include descriptive information for each term-equivalent pair, including usage samples, synonyms, definitions and grammatical information. Inclusion of the context of a term is particularly important in scientific and technical fields.The other major advantage is that terminology banks can be frequently updated with new technical terms. Translators have to deal with newly-developed situations, processes and materials. Dictionaries cannot provide this sort of information as the time lag between editions is too long, a minimum of 2.4 years, even in such fastmoving fields as electronics. 4 The alternative is to consult other translators or foreign specialists or research the topic in detail to be able to deduce the meaning, a process which can take up to 60 per cent of the total translation time. 4 So the dissemination of new terminology via online data bases can provide a much-needed aid to translation.As one of this afternoon's papers will deal with a terminology bank in detail, I will limit any further comments on the subject to mention of the European Community's terminology bank EURODICAUTOM, which is now publicly available via EURONET from the ECHO service of the CEC. Although there is currently much international activity in the area of producing standardized terminology in machine-readable form, encouraged and co-ordinated by Infoterm, 5 EURODICAUTOM appears to be the only terminology bank publicly and easily available at the moment.The bibliographic databases, however, contain a great deal of multilingual information which can be used in a similar way and have the advantage of being ready now. 
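The sketch below shows, with an entirely invented entry, the kind of record a terminology bank of the sort just described can carry that a printed dictionary cannot easily keep up to date: equivalents plus a definition, a context sample, a subject field and a date of entry.

```python
# One invented terminology-bank record; the subject field disambiguates the term.
term_bank = [
    {"fr": "techniques d'abattage", "subject": "coal-mining",
     "en": "coal-getting techniques",
     "definition": "Methods used to detach coal from the seam.",
     "context": "Les techniques d'abattage ont évolué avec la mécanisation.",
     "entered": "1980-06"},
]

def lookup(term, subject, source="fr", target="en", bank=term_bank):
    """Return target-language equivalents of `term` restricted to one subject field."""
    return [e for e in bank
            if e.get(source, "").lower() == term.lower()
            and e["subject"] == subject and target in e]

if __name__ == "__main__":
    for entry in lookup("techniques d'abattage", "coal-mining"):
        print(entry["en"], "-", entry["definition"])
```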
Databases covering such diverse areas as sociology, engineering and agriculture (such as Sociological Abstracts, COMPENDEX and Commonwealth Agricultural Bureaux Abstracts) all carry article titles in the original language of publication, each significant word of which is searchable, as well as their translations (see Fig. 2). The pairs of titles show the term in question in context, which provides an important check on meaning. Data bases in languages other than English, if they still contain English titles, provide extra help in the form of non-English abstracts and indexing terms. PASCAL, a multidisciplinary French data base, is useful for such purposes (see Fig. 3) and also provides access to language pairs which do not include English. Such databases also provide detailed subject classifications and indexing which are not available for multi-purpose terminology banks. (5) Portraying language structure Translators are not always searching for term equivalents but sometimes want to find words related in a different way to the one they are starting with. Working within the English language we would go to Roget's Thesaurus or a thesaurus in a special subject area which would guide us to broader, narrower and related terms. Multilingual thesauri do exist and are good candidates for online treatment. Several large volumes are required if all the possible structural relationships as well as alphabetic indexes are to be provided in printed form. Online to such a data base, you could wander freely through a language, choosing from a multiplicity of entry points and tracing a conceptual path at will. Of course, this assumes that the relationships between the concepts have been identified in the first place. To quote our chairman again 'it is "ideas" not "words" that we transpose from one language and culture to another'. 2 When organizing any data base for computer searching, you soon find that you have to think very clearly about the ideas behind the information. Every relationship must be made explicit in order to allow automatic processing by the computer. If you will permit me to do a little star-gazing at this point, I would like to be able to see a time when all translators could have their personal files of information on microcomputers for online access, as some, doubtless, have at present. No longer would finding a piece of information be restricted by the alphabetical order of cards or the number of cross-references that the compiler could be bothered to write out. On the other hand, they would be forced to analyse the relationships between words and the different functions that the same word may have in varied contexts. This would indeed be an aid to translation. Coming back to earth, it must be said that online data bases will be no help unless they are readily available and cost-effective. Translators work in very different conditions: within translation departments of large organizations, with one or two colleagues in a medium-sized company or freelance, often far from centres of information. Are online systems equally available to all? I am afraid that I would be painting a false picture if I said this were so. Throughout this paper I have been using 'publicly available' to denote the accessibility of a particular data base to anyone who has signed contracts with the relevant system and telecommunication suppliers and who has a terminal and some means of connecting it to the public telephone network.
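As a toy illustration of two points made earlier in this passage, using bilingual title pairs as an improvised terminology source and browsing a thesaurus online, the two fragments below use invented records and an invented concept graph; they do not reflect the layout of PASCAL or of any real thesaurus.

```python
# Invented bibliographic records carrying the title in the original language
# plus its English translation, as the databases described above do.
records = [
    {"title_orig": "Untersuchungen über den Rüsselkäfer Hylobius abietis",
     "title_en": "Investigations on the pine weevil", "lang": "German"},
    {"title_orig": "Conversion photovoltaïque de l'énergie solaire",
     "title_en": "Photovoltaic conversion of solar energy", "lang": "French"},
]

def term_in_context(term, lang, file=records):
    """Return (original title, English title) pairs whose original title contains the term."""
    return [(r["title_orig"], r["title_en"])
            for r in file
            if r["lang"] == lang and term.lower() in r["title_orig"].lower()]

# Invented multilingual thesaurus fragment: each concept lists its relations
# and its labels in two languages, so one can wander from entry point to entry point.
thesaurus = {
    "energy sources": {"narrower": ["solar energy", "wind energy"], "broader": [],
                       "labels": {"en": "energy sources", "fr": "sources d'énergie"}},
    "solar energy":   {"narrower": [], "broader": ["energy sources"],
                       "labels": {"en": "solar energy", "fr": "énergie solaire"}},
    "wind energy":    {"narrower": [], "broader": ["energy sources"],
                       "labels": {"en": "wind energy", "fr": "énergie éolienne"}},
}

def narrower_terms(concept, lang="fr", depth=1):
    """Collect labels of narrower concepts down to a given depth."""
    found = []
    if depth == 0:
        return found
    for child in thesaurus[concept]["narrower"]:
        found.append(thesaurus[child]["labels"][lang])
        found.extend(narrower_terms(child, lang, depth - 1))
    return found

if __name__ == "__main__":
    print(term_in_context("énergie solaire", "French"))
    print(narrower_terms("energy sources"))   # ['énergie solaire', 'énergie éolienne']
```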
There are many useful data bases which are not available in this way. The only thing that can be done here is to find out what exists and try to encourage the organizations who have developed data bases to share their expertise with others.To some of you, the conditions for accessing publicly-available databases may be daunting enough. Within large organizations, it is likely that the information department will already have a terminal which can be used. A company with one or two translators might consider buying a terminal and organizing access specifically for translation purposes. The Online Information Centre at Aslib can provide details of what has to be done. For the isolated freelance translator, there is the possibility of using a middleman, an information broker, to carry out searches on your behalf. The benefits are not as great as when you are present while the search is being conducted, but the process is essentially no different from telephoning a reference library for information. The Online Information Centre can provide a list of brokers.Cost-effectiveness is obviously important. Please note that I did not say cheapness. The costs of using online systems are clearly visible and may seem high until you realize, for example, that the time spent finding the words you want is also a significant cost factor. It might not be cost-effective for a freelance translator to have his own terminal until there is a great deal more information relevant to him from this source yet to a larger organization the increased speed of retrieval might make it worthwhile now.What are the costs? They vary greatly and I can give only an order of magnitude. A simple terminal can be bought for under £1,000 but they can also be hired. Telecommunications charges will be about £2.50 per hour on EURONET or £10.00 per hour on IPSS. Access to the data base may cost up to £10 per hour if it is subsidized or about £30 per hour at commercial rates. Information brokers' rates will vary from organization to organization. Remember, though, that you may need only 5-10 minutes to find your information.To summarize the current status of online data bases as aids for translators, there exists at the moment a small number of publicly-available data bases aimed specifically at translators or those responsible for the provision of translations. A much larger number of data bases is aimed at information retrieval specialists yet they provide multilingual information which is, at present, underutilized by translators. Availability of equipment and cost of these services may limit use for the time being but as the number of online sources directed at translators increases, as I am sure it will, it will become increasingly cost-effective to go online. | null | null | null | null | Main paper:
:
THE ENGLISH WORD is 'online' and its French equivalent 'conversationnel'. Although the main topic of this paper is how information transfer via online searching can enhance translation, I cannot resist the opportunity to start with this example of the reverse case-how translation may enhance information transfer. 'Online' merely states that you are in direct contact with a computer; 'conversationnel' implies that two-way communication is possible between you and the computer. The latter is nearer the truth. Online searching can be defined as the process of interactively searching for and retrieving information by computer from a machine-readable database. 1 Why should interaction be a desirable feature of computer use? Our chairman remarked during the 1978 seminar on 'Translating and the Computer' that 'if translators are to coexist with computers, we must become actively involved in directing their uses, let us be the masters and they the tools'. 2 Using the computer in an online mode helps us to achieve this control. It is well known that computers will only do what they are told. If an enquiry has to be formulated in a very complex language before it can be put to the computer, and this formulation involves computer scientists and a long delay, then we, non-computer scientists, do not have control of computers. Online searching allows us to put the questions in simple language. If you happen to phrase the question in the wrong way or if the computer does not have the information you require it will tell you so, immediately and, more or less, politely. You are then in a position to correct the question. Online searching is understandable and therefore controllable.In this paper I shall be asking the following questions: what is online? why is it useful? how does it help translators? where can you obtain access from? and how much does it cost?For the benefit of those who are not familiar with the concepts underlying online information retrieval I will give a brief explanation and hope that those to whom this may appear simplistic will bear with me.The user of the online system constructs an enquiry to be matched against the collection of information in a database. Some people have drawn the distinction between databases, containing bibliographic references, and data banks, containing non-bibliographic information. For our purposes today, I will not bother about this distinction and will refer solely to databases, meaning by this structured collections of any kind of information in machine readable form. Depending on the response to the enquiry, the user reformats his enquiry or modifies its scope until he is satisfied with the information retrieved.Communication with the database takes place via a keyboard terminal with a screen and/or a printer to display the interaction and the results. The data bases exist on computer discs made available by search system suppliers. Typically, each search system supplier will have a computer and will offer access to a number of data bases. The interposition of a telecommunications network between the user's terminal and the computers has revolutionized the accessibility of the databases. Users make what is usually a local telephone call and link their terminals to a network, which allows them to connect to computers anywhere in the world. 
One network, the International Packet Switching Service (IPSS) links UK users with computers in the USA (and viceversa) and another, EURONET, links users and computers throughout the countries of the European Community.Among the chief advantages of online is the ability to modify the enquiry as the search progresses. Another advantage is the large amount of information that can be stored on computer discs. More access points to the information are possible when it is in this form rather than when it is subject to the limiting factors of the printed form, physical volume and type-setting costs. Thus in a printed book, access to the contents is made via the index; with an online equivalent, every word in the book could be searched. Retrieval of information online is also fast, a matter of minutes rather than hours or days. The information can be kept up to date easily because of the possibility of merging new items into the existing database.The crucial question from your viewpoint is how online systems may help translators. Professor Sager in 'Translating and the Computer' outlined the stages of decisions and actions involved in the production of a translation. 3 Online can be used in the first two of these: deciding whether to translate or request a translation, and preparing the rough translation. The data bases which can be exploited for these purposes are diverse in nature. They include the currently small but ever growing number designed specifically for people concerned with translations and the hundreds primarily aimed at information retrieval specialists in specific subject areas. The relevance of the former group will be obvious but the usefulness of the latter group resides in the extent to which data base producers gather and make searchable multilingual information.The first two stages of translation referred to above cover five different areas in which online data bases can aid translation.(1) Deciding whether a translation is required During the production of many data bases, material in many source languages is abstracted by linguists into the target language of the data base, usually but not always English. If the candidate document for translation is published, it is worth checking to see if an abstract exists in one of the online data bases. If so, the information content of the abstract might render a full translation unnecessary. Of course, there is nothing new in this approach. However, whereas previously searching for an abstract might have taken several hours and might therefore have been deemed not worthwhile, an online search can in a matter of minutes determine whether or not an abstract exists. Millions of references can be scanned for the combination of authors' names, title words and journal title which identify a particular paper. Because publicly-available data bases usually cover only published material and because, it must be admitted, the information content of abstracts varies between data bases, the usefulness of this approach is limited yet should not be overlooked.The same databases may also be used in finding whether a translation already exists, if it is likely to be in one of the cover-to-cover translated journals that exist in certain subject areas.More relevant, however, may be the World Transindex (WTI) which has recently been made publicly available online as file 33 of the Information Retrieval Service (IRS) in Frascati, Italy. This is accessed from the UK via the Dialtech Service of the Department of Industry. 
WTI holds details of translations collected since 1978 by the International Translation Centre in the Netherlands and the French Centre National de la Recherche Scientifique (CNRS). These include translations of scientific and technical literature from East European and Asiatic languages into Western languages and also translations of other Western languages into French. Because the information is online it is easy to provide a large number of access points to the information such as type of documents, target and source languages, publication year, title and subject index terms. Figure 1 shows a typical record. An example of a search would be to find details of translations of articles published in 1980 on solar energy from Russian into French. At present this is a fairly small database but it illustrates the possibilities inherent in online systems for sharing the collections of specialist centres and making them available over a wide area.(3) Gathering information in the subject area of a translation Information gathering may form one of the preliminaries of a translation if the translator is working in an unfamiliar area or if the subject of the translation is particularly recondite. An abstract of the paper, if it exists, may provide a valuable starting point, even if it is not considered to be full enough to act as a document surrogate. Interrogation of the same subject specific-data bases may reveal a review that will provide the necessary background.(4) Terminology By providing access to remote terminology banks, online makes its potentially greatest contribution to aiding translation. These data bases are like computerised dictionaries in that they provide equivalents of terms in a number of languages. However, not being limited by having to be produced in printed form, they can also include descriptive information for each term-equivalent pair, including usage samples, synonyms, definitions and grammatical information. Inclusion of the context of a term is particularly important in scientific and technical fields.The other major advantage is that terminology banks can be frequently updated with new technical terms. Translators have to deal with newly-developed situations, processes and materials. Dictionaries cannot provide this sort of information as the time lag between editions is too long, a minimum of 2.4 years, even in such fastmoving fields as electronics. 4 The alternative is to consult other translators or foreign specialists or research the topic in detail to be able to deduce the meaning, a process which can take up to 60 per cent of the total translation time. 4 So the dissemination of new terminology via online data bases can provide a much-needed aid to translation.As one of this afternoon's papers will deal with a terminology bank in detail, I will limit any further comments on the subject to mention of the European Community's terminology bank EURODICAUTOM, which is now publicly available via EURONET from the ECHO service of the CEC. Although there is currently much international activity in the area of producing standardized terminology in machine-readable form, encouraged and co-ordinated by Infoterm, 5 EURODICAUTOM appears to be the only terminology bank publicly and easily available at the moment.The bibliographic databases, however, contain a great deal of multilingual information which can be used in a similar way and have the advantage of being ready now. 
Databases covering such diverse areas as sociology, engineering and agriculture, (such as Sociological Abstracts, COMPENDEX and Commonwealth Agricultural Bureaux Abstracts), all carry article titles in the original language of publication, each significant word of which is searchable, as well as their translations (see Fig. 2 ). The pairs of titles show the term in question in context, which provides an important check on meaning. Data bases in languages other than English, if they still contain English titles, provide extra help in the form of non-English abstracts and indexing terms. PASCAL, a multidisciplinary French data base, is useful for such purposes (seeInvestigations on the pine weevil (Hylobius abietis Fig. 3 ) and also provides access to language pairs which do not include English. Such databases also provide detailed subject classifications and indexing which are not available for multi-purpose terminology banks.(5) Portraying language structure Translators are not always searching for term equivalents but sometimes want to find words related in a different way to the one they are starting with. Working within the English language we would go to Roget's Thesaurus or a thesaurus in a special subject area which would guide us to broader, narrower and related terms. Multilingual thesauri do exist and are good candidates for online treatment. Several large volumes are required if all the possible structural relationships as well as alphabetic indexes are to be provided in printed form. Online to such a data base, you could wander freely through a language, choosing from a multiplicity of entry points and tracing a conceptual path at will. Of course, this assumes that the relationships between the concepts have been identified in the first place. To quote our chairman again 'it is "ideas" not "words" that we transpose from one language and culture to another'. 2 When organizing any data base for computer searching, you soon find that you have to think very clearly about the ideas behind the information. Every relationship must be made explicit in order to allow automatic processing by the computer.If you will permit me to do a little star-gazing at this point, I would like to be able to see a time when all translators could have their personal files of information on microcomputers for online access, as some, doubtless, have at present. No longer would finding a piece of information be restricted by the alphabetical order of cards or the number of cross-references that the compiler could be bothered to write out. On the other hand, they would be forced to analyse the relationships between words and the different functions that the same word may have in varied contexts. This would indeed be an aid to translation.Coming back to earth, it must be said that online data bases will be no help unless they are readily available and cost-effective.Translators work in very different conditions: within translation departments of large organizations, with one or two colleagues in a medium-sized company or freelance, often far from centres of information. Are online systems equally available to all?I am afraid that I would be painting a false picture if I said this were so. Throughout this paper I have been using 'publicly available' to denote the accessibility of a particular data base to anyone who has signed contracts with the relevant system and telecommunication suppliers and who has a terminal and some means of connecting it to the public telephone network. 
There are many useful data bases which are not available in this way. The only thing that can be done here is to find out what exists and try to encourage the organizations who have developed data bases to share their expertise with others.To some of you, the conditions for accessing publicly-available databases may be daunting enough. Within large organizations, it is likely that the information department will already have a terminal which can be used. A company with one or two translators might consider buying a terminal and organizing access specifically for translation purposes. The Online Information Centre at Aslib can provide details of what has to be done. For the isolated freelance translator, there is the possibility of using a middleman, an information broker, to carry out searches on your behalf. The benefits are not as great as when you are present while the search is being conducted, but the process is essentially no different from telephoning a reference library for information. The Online Information Centre can provide a list of brokers.Cost-effectiveness is obviously important. Please note that I did not say cheapness. The costs of using online systems are clearly visible and may seem high until you realize, for example, that the time spent finding the words you want is also a significant cost factor. It might not be cost-effective for a freelance translator to have his own terminal until there is a great deal more information relevant to him from this source yet to a larger organization the increased speed of retrieval might make it worthwhile now.What are the costs? They vary greatly and I can give only an order of magnitude. A simple terminal can be bought for under £1,000 but they can also be hired. Telecommunications charges will be about £2.50 per hour on EURONET or £10.00 per hour on IPSS. Access to the data base may cost up to £10 per hour if it is subsidized or about £30 per hour at commercial rates. Information brokers' rates will vary from organization to organization. Remember, though, that you may need only 5-10 minutes to find your information.To summarize the current status of online data bases as aids for translators, there exists at the moment a small number of publicly-available data bases aimed specifically at translators or those responsible for the provision of translations. A much larger number of data bases is aimed at information retrieval specialists yet they provide multilingual information which is, at present, underutilized by translators. Availability of equipment and cost of these services may limit use for the time being but as the number of online sources directed at translators increases, as I am sure it will, it will become increasingly cost-effective to go online.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 531 | 0.003766 | null | null | null | null | null | null | null | null |
1e2af7ecd108269d1d455c41ee9601f04208f5e6 | 14201569 | null | Some Computational Aspects of Situation Semantics | Can a realist model theory of natural language be computationally plausible? Or, to put it another way, is the view of linguistic meaning as a relation between expressions of a natural language and things (objects, properties, etc.) in the world, as opposed to a relation between expressions and procedures in the head. consistent with a computational approach to understanding natural language? The model theorist must either claim that the answer is yes, or be willing to admit that humans transcend the computatlonally feasible in their use of language? Until recently the only model theory of natural language that was at all well developed was Montague Grammar. Unfortunately, it was based on the primitive notion of "possible world" and so was not a realist theory, unless you are prepared to grant that all possible worlds are real. Montague Grammar is also computatlonally intractable, for reasons to be discussed below. | {
"name": [
"Barwise, Jon"
],
"affiliation": [
null
]
} | null | null | 19th Annual Meeting of the Association for Computational Linguistics | 1981-06-01 | 12 | 10 | null | and I have developed a somewhat different approach to the model theory of natural language, a theor~ we call "Situation Semantics".Since one of my own motivations in the early days of this project was to use the insights of generalized racurslon theory to find a eomputatlonally plausible alternative to Montague Grammar, it seems fitting to give a progress report here.First, however, l can't resist putting my two cents worth into this continuing discussion. Procedural semantics starts from the observation that there is something computational about our understanding of natural language. This is obviously correct. Where some go astray, though, is in trying to identify the meaning of an expression with some sort of program run in the head. But programs are the sorts of things to HAVE meanings, not to BE meanings. A meaningful program sets up some sort of relationship between thingsperhaps a function from numbers to numbers, perhaps something much more sophisticated.But it is that relation which is its meaning, not some other program.The situation is analogous in the case of natural language. It is the relationships between things in the world that a language allows us to express that make a language meaningful. It is these relationships that are identified with the meanings of the expressions in model theory. The meaningful expressions are procedures that define these relations that are their meanings° At least this is the view that Perry and I take in situation semantics. As Quine has seen most clearly, the resulting view of semantics is one where to speak of a part of the world, as in (1). is to speak of the whole world and of all things in the world.(I) The dog with the red collar belongs to my son.There is a philosophical position that grows out of this view of logic, but it is not a practlc~l one for those who would implement the resulting model-theory as a theory of natural language. 
Any treatment of (I) that involves a universal quantification over all objects in the domain of discourse is doom"d by facts of ordinary discourse, e.g., the fact that I can make a statement llke (I) in a situation to describe another situation without making any statement at all about other dogs that come up later in a conversation, let alone about the dogs of Tibet.Logicians have been all too ready to dismiss such philosophical scruples as irrelevant to our task-especially shortsighted since the same problem is well known to have been an obstacle in developing recurslon theory, both ordinary recur sion theory and the generalizations to other domains like the functions of finite type.We forget that only in 1938, several years after his initial work in recurslon theory, did K/eene introduce the class of PARTIAL recurslve functions in order to prove the famous Zecurslon Theorem.We tend to overlook the significance of this move, from total to partial functions, until its importance is brought into focus in other contexts.This is Just what happened when Kleene developed his recurslon theory for functions of finite type.His initial formulation restricted attention to total functlons, total functions of total functlons, etc.Two very important principles fail in the resulting theory -the Substitution Theorem and the First Recurslon Theorem.theory has been raworked by Platek (1963) , Moschovakls (1975) , and by Kleene (1978 Kleene ( , 1980 using partial functions, partial functions of partial functions, etc., as the objects over which computations take place, imposing (in one way or another) the following constraint on all objects F of the theory: Persistence of Computations: If s is a partial function and F(s) is defined then F(s') m F(s) for every extension s" of a.In other words, it should not be possible to invalidate s computation that F(s) -a by simply adding further information to s. To put it yet another way, computations involving partial functions s should only be able to use positive information about s, not information of the form that s is undefined at this or that argument.To put it yet another way, F should be continuous in the topology of partial information.Computatlonally, we are always dealing with partial information and must insure persistence (continuity) of computations from it. But thls is just what blocks a straightforward implementation of the standard modeltheory--the whollstic view of the world which it is committed to, based on Frege's initial supposition.When one shifts from flrst-order model-theory to the index or "possible world" se~antics used in ~ionta~e's semantics for natural language, the whollstlc view must be carried to heroic lengths. For index semantics must embrace (as David Lewis does) the claim that talk about a particular actual situation talks indirectly not Just about everything which actually exists, but about all possible objects and all possible worlds.And It is just thls point that raises serious difficulties for Joyce Friedman and her co-workers in their attempt to implement ~iontague Grammar in a working system (Friedman and Warren, 1978) .The problem is that the basic formalization of possible world semantics is incompatible wlth the limitations imposed on us by partial information. Let me illustrate the problem thec arises in a very simple instance. 
In possible world semantics, the meaning of a word llke "talk' is a total function from the set I of ALL possible worlds to the set of ALL TOTAL functions from the set A of ALL possible individuals to the truth values 0, i. The intuition is that b talks in 'world" i if meaning('talk')(1)(d) -i.It is built into the formalism that each world contains TOTAL information about the extensions of all words and expressions of the language. The meaning of an adverb llke "rapidly" is a total function from such functions (from I into Fun(A,2)) to other such functions. is really inconsistent wlth the constraints placed on us by partial information. At the same tlme work on the semantics of perception statements led me away from possible worlds, while reinforcing my conviction that it was crucial to represent partial information about the world around us, information present in the perception of the scenes before us and of the situations in which we find ourselves all the time. | null | null | The world we perceive a-~ talk about consists not just of objects, nor even of just objects, properties and relations, hut of objects having properties and standing in various relations to one another; that is, we perceive and talk about various types of situations from the perspective of other situations.In situation semantics the meanlng of a sentence is a relation between various types of situations, types of discourse situations on the one har~ and types of "subject matter" sltuatio~s on the other. We represent various types of situations abstractly as PARTIAL functions from relations and objects to 0 and I. Expressions of a language heve a fixed llngulstlc meanlng, Indepe-~enC of the discourse situation. The same sentence (2) can be used in different types of discourse situations to express different propositions. Thus, we can treat the linguistic meaning of an expression as a function from discourse si~uatlon types to other complexes of objects a -a properties. Application of thlS function to a partioular discourse situation type we call the interpretation of the expression.In particular, the interpretation of a sentence llke (2) in a discourse situation type llke d iS a set of various situation types, including s* shove, but not including s.This set of types is called the proposition expressed by (2) .Various syntactic categories of natural language will have various sorts of interpretations. Verb phrases, e.g., will be interpreted by relations between objects and situation types. Definite descriptions will he interpreted as functions from situation types to individuals.The difference between referential and attributive uses of definite descriptions will correspond to different ways of using such a function, evaluation at s particular accessible situation, or to constrain other types within its domain. | At my talk I will illustrate the ideas discussed above by presenting a grammar and formal semantics for a fragment of English that embodies definite an d indefinite descriptions, restrictive and nonrestrictive relative clauses, and indexlcals llke "I", "you", "this" and "that". The aim is to have a semantic account that does not go through any sort of flrst-order "logical form", but operates off of the syntactic rules of English. 
The fragment incorporates both referential and attributive uses of descriptions.The basic idea is that descriptions are interpreted as functions from situation types to individuals, restrictive relative clauses are interpreted as functions from situation types to sub-types, and the interpretation of the whole is to be the composition of the functions interpreting the parts. Thus, the interpretations of "the", "dog", and "that talks" are given by the following three functions, respectively: of "the dog that talks" is Just the composition of these three functions.From a logical point of view, this is quite interesting. In first-order logic, the meaning of "the dog that talks' has to be built up from the meanings of 'the' and 'dog that talks', not from the meanings of "the dog* and 'that talks'. However, in situation semantics, since composition of functions is associative, we can combine the meanings of these expressions either way: f.(g.h) -(f.g).h. Thus, our semantic analysis is compatible with both of the syntactic structures argued for in the linguistic literature, the Det-Nom analysis and the NP-R analysis.One point that comes up in Situation Semantics that might interest people st this meeting Is the reinterpretaclon of composltlonality that it forces on one, more of a top-down than a bottom-up composltionallty.This makes it much more computatlonally tractible, since it allows us to work with much smaller amount of information. Unfortunately, a full discussion of this point is beyond the scope of such a small paper.Another important point not discussed is the constraint placed by the requirement of persistence discussed in section 2.It forces us to introduce space-time locations for the analysis of attrlbutive uses of definlte descriptions, locations that are also needed for the semantics of tense, aspect and noun phrases like "every man', "neither dog', and the Ilk,.The main point of this paper has been to alert the readers to a perspective in the model theory of natural language which they might well find interesting and useful. Indeed, they may well find that it is one that they have in many ways adopted already for other reasons. | Main paper:
actual situations and situation-types:
The world we perceive and talk about consists not just of objects, nor even of just objects, properties and relations, but of objects having properties and standing in various relations to one another; that is, we perceive and talk about various types of situations from the perspective of other situations. In situation semantics the meaning of a sentence is a relation between various types of situations, types of discourse situations on the one hand and types of "subject matter" situations on the other. We represent various types of situations abstractly as PARTIAL functions from relations and objects to 0 and 1. Expressions of a language have a fixed linguistic meaning, independent of the discourse situation. The same sentence (2) can be used in different types of discourse situations to express different propositions. Thus, we can treat the linguistic meaning of an expression as a function from discourse situation types to other complexes of objects and properties. Application of this function to a particular discourse situation type we call the interpretation of the expression. In particular, the interpretation of a sentence like (2) in a discourse situation type like d is a set of various situation types, including s* above, but not including s. This set of types is called the proposition expressed by (2). Various syntactic categories of natural language will have various sorts of interpretations. Verb phrases, e.g., will be interpreted by relations between objects and situation types. Definite descriptions will be interpreted as functions from situation types to individuals. The difference between referential and attributive uses of definite descriptions will correspond to different ways of using such a function, evaluation at a particular accessible situation, or to constrain other types within its domain.
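The following is a rough illustration, in an invented encoding rather than the paper's own notation, of situation types as partial assignments of truth values to (relation, individual) pairs, and of a sentence meaning as a function from discourse situation types to a test for membership in the proposition expressed.

```python
# A situation type: a partial function from (relation, individual) pairs to 0/1,
# here just a dict with some pairs left out (that is the partiality).
s_star = {("talks", "j1"): 1}          # a type in which the individual j1 talks
s      = {("talks", "j1"): 0}          # a type in which j1 does not talk

def meaning_jackie_talks(discourse_type):
    """Given who the name 'Jackie' picks out in the discourse situation type,
    return the predicate that holds of exactly the situation types the sentence describes."""
    referent = discourse_type["Jackie"]
    return lambda sit_type: sit_type.get(("talks", referent)) == 1

d = {"Jackie": "j1"}                   # discourse situation type fixing the referent
interpretation = meaning_jackie_talks(d)   # interpretation of the sentence in d
assert interpretation(s_star) and not interpretation(s)   # s* is in the proposition, s is not
```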
a fragment of english involving definite and indefinite descriptions:
At my talk I will illustrate the ideas discussed above by presenting a grammar and formal semantics for a fragment of English that embodies definite and indefinite descriptions, restrictive and nonrestrictive relative clauses, and indexicals like "I", "you", "this" and "that". The aim is to have a semantic account that does not go through any sort of first-order "logical form", but operates off of the syntactic rules of English. The fragment incorporates both referential and attributive uses of descriptions. The basic idea is that descriptions are interpreted as functions from situation types to individuals, restrictive relative clauses are interpreted as functions from situation types to sub-types, and the interpretation of the whole is to be the composition of the functions interpreting the parts. Thus, the interpretations of "the", "dog", and "that talks" are given by three such functions, and the interpretation of "the dog that talks" is just the composition of these three functions. From a logical point of view, this is quite interesting. In first-order logic, the meaning of "the dog that talks" has to be built up from the meanings of "the" and "dog that talks", not from the meanings of "the dog" and "that talks". However, in situation semantics, since composition of functions is associative, we can combine the meanings of these expressions either way: f.(g.h) = (f.g).h. Thus, our semantic analysis is compatible with both of the syntactic structures argued for in the linguistic literature, the Det-Nom analysis and the NP-R analysis. One point that comes up in Situation Semantics that might interest people at this meeting is the reinterpretation of compositionality that it forces on one, more of a top-down than a bottom-up compositionality. This makes it much more computationally tractable, since it allows us to work with a much smaller amount of information. Unfortunately, a full discussion of this point is beyond the scope of such a small paper. Another important point not discussed is the constraint placed by the requirement of persistence discussed in section 2. It forces us to introduce space-time locations for the analysis of attributive uses of definite descriptions, locations that are also needed for the semantics of tense, aspect and noun phrases like "every man", "neither dog", and the like.
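A toy rendering of the composition point, with invented data: each phrase is interpreted as a function on situation types, and because composition is associative the Det-Nom and NP-R bracketings give the same interpretation.

```python
# Situation types here are crudely encoded as sets of (individual, kind, talks?) facts.
e = {("fido", "dog", True), ("rex", "dog", False), ("felix", "cat", True)}

def that_talks(sit):               # situation type -> sub-type about the talkers
    return {f for f in sit if f[2]}

def dog(sit):                      # situation type -> sub-type about the dogs
    return {f for f in sit if f[1] == "dog"}

def the(sit):                      # situation type -> its unique individual
    (individual,) = {f[0] for f in sit}
    return individual

def compose(f, g):
    return lambda x: f(g(x))

# Associativity of composition: either syntactic bracketing yields the same result.
det_nom = compose(compose(the, dog), that_talks)   # (the . dog) . that_talks
np_r    = compose(the, compose(dog, that_talks))   # the . (dog . that_talks)
assert det_nom(e) == np_r(e) == "fido"
```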
conclusion:
The main point of this paper has been to alert the readers to a perspective in the model theory of natural language which they might well find interesting and useful. Indeed, they may well find that it is one that they have in many ways adopted already for other reasons.
john perry:
and I have developed a somewhat different approach to the model theory of natural language, a theory we call "Situation Semantics". Since one of my own motivations in the early days of this project was to use the insights of generalized recursion theory to find a computationally plausible alternative to Montague Grammar, it seems fitting to give a progress report here.

First, however, I can't resist putting my two cents worth into this continuing discussion. Procedural semantics starts from the observation that there is something computational about our understanding of natural language. This is obviously correct. Where some go astray, though, is in trying to identify the meaning of an expression with some sort of program run in the head. But programs are the sorts of things to HAVE meanings, not to BE meanings. A meaningful program sets up some sort of relationship between things, perhaps a function from numbers to numbers, perhaps something much more sophisticated. But it is that relation which is its meaning, not some other program. The situation is analogous in the case of natural language. It is the relationships between things in the world that a language allows us to express that make a language meaningful. It is these relationships that are identified with the meanings of the expressions in model theory. The meaningful expressions are procedures that define these relations that are their meanings. At least this is the view that Perry and I take in situation semantics.

As Quine has seen most clearly, the resulting view of semantics is one where to speak of a part of the world, as in (1), is to speak of the whole world and of all things in the world.

(1) The dog with the red collar belongs to my son.

There is a philosophical position that grows out of this view of logic, but it is not a practical one for those who would implement the resulting model-theory as a theory of natural language.
Any treatment of (1) that involves a universal quantification over all objects in the domain of discourse is doomed by facts of ordinary discourse, e.g., the fact that I can make a statement like (1) in a situation to describe another situation without making any statement at all about other dogs that come up later in a conversation, let alone about the dogs of Tibet. Logicians have been all too ready to dismiss such philosophical scruples as irrelevant to our task, which is especially shortsighted since the same problem is well known to have been an obstacle in developing recursion theory, both ordinary recursion theory and the generalizations to other domains like the functions of finite type. We forget that only in 1938, several years after his initial work in recursion theory, did Kleene introduce the class of PARTIAL recursive functions in order to prove the famous Recursion Theorem. We tend to overlook the significance of this move, from total to partial functions, until its importance is brought into focus in other contexts.

This is just what happened when Kleene developed his recursion theory for functions of finite type. His initial formulation restricted attention to total functions, total functions of total functions, etc. Two very important principles fail in the resulting theory: the Substitution Theorem and the First Recursion Theorem. The theory has been reworked by Platek (1963), Moschovakis (1975), and by Kleene (1978, 1980) using partial functions, partial functions of partial functions, etc., as the objects over which computations take place, imposing (in one way or another) the following constraint on all objects F of the theory:

Persistence of Computations: If s is a partial function and F(s) is defined, then F(s') = F(s) for every extension s' of s.

In other words, it should not be possible to invalidate a computation that F(s) = a by simply adding further information to s. To put it another way, computations involving partial functions s should only be able to use positive information about s, not information of the form that s is undefined at this or that argument. To put it yet another way, F should be continuous in the topology of partial information. Computationally, we are always dealing with partial information and must ensure persistence (continuity) of computations from it. But this is just what blocks a straightforward implementation of the standard model theory: the wholistic view of the world to which it is committed, based on Frege's initial supposition. When one shifts from first-order model theory to the index or "possible world" semantics used in Montague's semantics for natural language, the wholistic view must be carried to heroic lengths. For index semantics must embrace (as David Lewis does) the claim that talk about a particular actual situation talks indirectly not just about everything which actually exists, but about all possible objects and all possible worlds. And it is just this point that raises serious difficulties for Joyce Friedman and her co-workers in their attempt to implement Montague Grammar in a working system (Friedman and Warren, 1978). The problem is that the basic formalization of possible world semantics is incompatible with the limitations imposed on us by partial information. Let me illustrate the problem that arises in a very simple instance.
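The persistence constraint is easy to demonstrate computationally. In the sketch below (my own toy example, not from the paper), partial functions are dicts and the two operators are invented: the first uses only positive information and so survives any extension of its argument, while the second illegitimately tests for the absence of information and is therefore not persistent:

    # A small illustration of the persistence constraint, using dicts as partial functions.
    def extend(s, extra):
        out = dict(s)
        out.update(extra)
        return out

    def persistent_F(s):
        # Uses only positive information: defined as soon as s settles argument 0.
        return s[0] + 1 if 0 in s else None          # None = "not yet defined"

    def nonpersistent_G(s):
        # Illegitimately uses the *absence* of information about argument 1.
        return 99 if 1 not in s else s[1]

    s  = {0: 7}                  # partial: defined only at 0
    s2 = extend(s, {1: 5})       # an extension of s

    print(persistent_F(s), persistent_F(s2))        # 8 8   -- the computation survives extension
    print(nonpersistent_G(s), nonpersistent_G(s2))  # 99 5  -- extension invalidates the result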
In possible world semantics, the meaning of a word like "talk" is a total function from the set I of ALL possible worlds to the set of ALL TOTAL functions from the set A of ALL possible individuals to the truth values 0, 1. The intuition is that b talks in "world" i if meaning('talk')(i)(b) = 1. It is built into the formalism that each world contains TOTAL information about the extensions of all words and expressions of the language. The meaning of an adverb like "rapidly" is a total function from such functions (from I into Fun(A,2)) to other such functions. This is really inconsistent with the constraints placed on us by partial information. At the same time, work on the semantics of perception statements led me away from possible worlds, while reinforcing my conviction that it was crucial to represent partial information about the world around us, information present in the perception of the scenes before us and of the situations in which we find ourselves all the time.
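To make the contrast concrete, here is a toy rendering (my own, with invented names and data) of the difference between the index-style total meaning function and a partial, situation-style one. The total formulation demands a verdict for every world-individual pair and fails as soon as information is missing, whereas the partial encoding simply stays silent:

    # Contrast sketch: total possible-world meaning vs. a partial, situation-style meaning.
    def total_meaning_talk(table):
        # Index semantics: the table must cover EVERY (world, individual) pair.
        def m(world):
            def at(individual):
                return table[(world, individual)]   # KeyError if information is missing
            return at
        return m

    # Partial alternative: absent pairs mean "unsettled", not false.
    partial_talk = {("w1", "a"): 1}
    def talks(world, individual):
        return partial_talk.get((world, individual))  # 1, 0, or None

    m = total_meaning_talk(partial_talk)
    print(talks("w2", "b"))                 # None: no commitment required
    try:
        print(m("w2")("b"))
    except KeyError:
        print("total meaning undefined: partial information cannot fill the table")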
Appendix:
| null | null | null | null | {
"paperhash": [
"mccarthy|programs_with_common_sense"
],
"title": [
"Programs with common sense"
],
"abstract": [
"Abstract : This paper discusses programs to manipulate in a suitable formal language (most likely a part of the predicate calculus) common instrumental statements. The basic program will draw immediate conclusions from a list of premises. These conclusions will be either declarative or imperative sentences. When an imperative sentence is deduced the program takes a corresponding action. These actions may include printing sentences, moving sentences on lists, and reinitiating the basic deduction process on these lists."
],
"authors": [
{
"name": [
"J. McCarthy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null
],
"s2_corpus_id": [
"62564854"
],
"intents": [
[]
],
"isInfluential": [
false
]
} | null | 524 | 0.019084 | null | null | null | null | null | null | null | null |
1fbbf4ff722984837d734d8a7d2364ac9cd3adb6 | 10627100 | null | Search and Inference Strategies in Pronoun Resolution: An Experimental Study | The qusstlun of how people resolve pronouns has the various factors combine. | {
"name": [
"Ehrlich, Kate"
],
"affiliation": [
null
]
} | null | null | 19th Annual Meeting of the Association for Computational Linguistics | 1981-06-01 | 11 | 8 | null | been of interest to language theorists for a long time because so much of what goes on when people find referents for pronouns seems to lie at the heart of comprehension. However, despite the relevance of pronouns for comprehension and language cheorT, the processes chat contribute to pronoun resolution have proved notoriously difficult Co pin down.Part of the difficulty arises from the wide range of fac=ors that can affect which antecedent noun phrase in a tex~ is usderstood to be co-referentlal with a particular pronoun. These factors can range from simple number/gender agreement through selectional rescrlc~ions co quite complex "knowledge chat has been acquired from the CaxC (see Webber, (1978) for a neatly illustrated description of many of these factors). Research in psychology, artificial intelligence a~d linguistics has gone a long way toward identifying some of these factors and their role in pronoun resolu~ion.For instance, in psychology, research carried ouC by Caramazza =-d his colleagues (Caramazza et el, 1977) as well as research chat I have dune (Ehrllch, 1980) , has demuns~rated that number/sender agreement really c=-fumcciun to constrain the choice of referent in a way Chat signiflcantly facilltaCes processing. Within an AI framework, there has been some very interesting work carried out by Sidner (1977) m~d Grosz (1977) thac seeks to identify the current topic of a Cex1: and co show Chat knowledge of the topic can considerably sillily pronoun resolutlon.It is important that people are able co select appropriate referents for pronouns and co have some basis for that decision. The research discussed so far has mentioned some of the factors Chac contribute co chose decisiuns. However, part of ~he problem of really understanding how people resolve pronouns is knowing how Certainly it is important a~d useful to polnc to a particular factor as concributlng to a reference decision, but in many texts more than one of these factors will be available to a reader or listener. One problem for the theorist is then to explaln which factor predominates in the decision as well as to describe the scheduling of evaluaclon procedures. If it could be shown that there was a stricc ordering in which tests were applied, say, number/gender agreement followed by selectionai restrictions followed by inference procedures, pronoun resoluclon may be simp- These two types of strategy, which will be referred to msem¢-lically as inference and search strategies, have different kinds of characteristics. A search strategy dictates the order in which candldaces are evaluated, but has no machinery for carrying out the evaluation.The inference strategy helps to set up the representaclon of the information in the cexC agains c which candldacas can be evaluated, but has ~o way of finding the c~aldidates. ~n the rest of this paper, she way these straCegles ~ighc interact will be explored and the results of two studies will be reported that bear on the issues.One possible search strategy is ~o examine candidates serially beginning with the one menKioned most recently and working back through the text. This strategy makes some sense because, as Hobbs (1978) has pointed out, most pronouns co-refer with antecedents Chat were menr.laned within the last few senuences.Thus, a serial search s~rategy provides a principled way of rescric~Lng how a text is searched. 
Moreover, there is some evidence fro~ psychological research ~hat it takes longer to resolve pronouns when the antecedent wlch which the pronotn~ co-refers is far rather than near the pronoun (e.g. Clark & $engul, 1979; SprlnEston, 1975 (2) John sold a car to Fred because he needed it a series of inferences based in part an out knowledge of selling a~d needing, supports ~he selection of Fred rather ~h=m John as referent for the pronoun "he". In the experiments to be reported, it was 'lexical'inferences ra~her ~han the oCher kind that were manipulated.Subjects in ~he experiment were asked to read texts such as the a~e given below:( In either case the inference will be drawn in response to r/Re need to decide on the acceptability of the candidate. In the second model, the inference is triggered by the anaphoric expression, e.g. "in his room" An the third sentence, and the need to relate chat expression to the location "inside" mentioned in a previous sentence. The inference is expected to take a certain amotmt of time to be drawn (cf. Kintsch, 1974) .According to the second model, one would expect that in cases where the antecedent is near the pronoun, there will be some effect due to inference because the process may not be completed in time to answer the question. When the antecedent is far from the pronoun, however, the inference process will be completed and hence no effect of inference should still be detected. Webber, 1978) .The picture of pronoun resolution that emerges from the studies reported here, is one in which effects of distance between the pronoun and its antecedent may play some role, not as a predicator of pronominal reference as has often been ~houEht, but as part of a search strateEy. There certainly are cases where nearer antecedents seem to be preferred over ones further back in the text; however, it is more profitable to look ~o concepts such as foregroundin E (of. Chafe, 1974) rather than silnple recency for explanations of the preference.• It is also of some interest to have shown that inferences ~my con~rlbute ~o pronoun resolution huc drawn for other reasons. | null | null | null | null | Main paper:
:
been of interest to language theorists for a long time because so much of what goes on when people find referents for pronouns seems to lie at the heart of comprehension. However, despite the relevance of pronouns for comprehension and language cheorT, the processes chat contribute to pronoun resolution have proved notoriously difficult Co pin down.Part of the difficulty arises from the wide range of fac=ors that can affect which antecedent noun phrase in a tex~ is usderstood to be co-referentlal with a particular pronoun. These factors can range from simple number/gender agreement through selectional rescrlc~ions co quite complex "knowledge chat has been acquired from the CaxC (see Webber, (1978) for a neatly illustrated description of many of these factors). Research in psychology, artificial intelligence a~d linguistics has gone a long way toward identifying some of these factors and their role in pronoun resolu~ion.For instance, in psychology, research carried ouC by Caramazza =-d his colleagues (Caramazza et el, 1977) as well as research chat I have dune (Ehrllch, 1980) , has demuns~rated that number/sender agreement really c=-fumcciun to constrain the choice of referent in a way Chat signiflcantly facilltaCes processing. Within an AI framework, there has been some very interesting work carried out by Sidner (1977) m~d Grosz (1977) thac seeks to identify the current topic of a Cex1: and co show Chat knowledge of the topic can considerably sillily pronoun resolutlon.It is important that people are able co select appropriate referents for pronouns and co have some basis for that decision. The research discussed so far has mentioned some of the factors Chac contribute co chose decisiuns. However, part of ~he problem of really understanding how people resolve pronouns is knowing how Certainly it is important a~d useful to polnc to a particular factor as concributlng to a reference decision, but in many texts more than one of these factors will be available to a reader or listener. One problem for the theorist is then to explaln which factor predominates in the decision as well as to describe the scheduling of evaluaclon procedures. If it could be shown that there was a stricc ordering in which tests were applied, say, number/gender agreement followed by selectionai restrictions followed by inference procedures, pronoun resoluclon may be simp- These two types of strategy, which will be referred to msem¢-lically as inference and search strategies, have different kinds of characteristics. A search strategy dictates the order in which candldaces are evaluated, but has no machinery for carrying out the evaluation.The inference strategy helps to set up the representaclon of the information in the cexC agains c which candldacas can be evaluated, but has ~o way of finding the c~aldidates. ~n the rest of this paper, she way these straCegles ~ighc interact will be explored and the results of two studies will be reported that bear on the issues.One possible search strategy is ~o examine candidates serially beginning with the one menKioned most recently and working back through the text. This strategy makes some sense because, as Hobbs (1978) has pointed out, most pronouns co-refer with antecedents Chat were menr.laned within the last few senuences.Thus, a serial search s~rategy provides a principled way of rescric~Lng how a text is searched. 
Moreover, there is some evidence fro~ psychological research ~hat it takes longer to resolve pronouns when the antecedent wlch which the pronotn~ co-refers is far rather than near the pronoun (e.g. Clark & $engul, 1979; SprlnEston, 1975 (2) John sold a car to Fred because he needed it a series of inferences based in part an out knowledge of selling a~d needing, supports ~he selection of Fred rather ~h=m John as referent for the pronoun "he". In the experiments to be reported, it was 'lexical'inferences ra~her ~han the oCher kind that were manipulated.Subjects in ~he experiment were asked to read texts such as the a~e given below:( In either case the inference will be drawn in response to r/Re need to decide on the acceptability of the candidate. In the second model, the inference is triggered by the anaphoric expression, e.g. "in his room" An the third sentence, and the need to relate chat expression to the location "inside" mentioned in a previous sentence. The inference is expected to take a certain amotmt of time to be drawn (cf. Kintsch, 1974) .According to the second model, one would expect that in cases where the antecedent is near the pronoun, there will be some effect due to inference because the process may not be completed in time to answer the question. When the antecedent is far from the pronoun, however, the inference process will be completed and hence no effect of inference should still be detected. Webber, 1978) .The picture of pronoun resolution that emerges from the studies reported here, is one in which effects of distance between the pronoun and its antecedent may play some role, not as a predicator of pronominal reference as has often been ~houEht, but as part of a search strateEy. There certainly are cases where nearer antecedents seem to be preferred over ones further back in the text; however, it is more profitable to look ~o concepts such as foregroundin E (of. Chafe, 1974) rather than silnple recency for explanations of the preference.• It is also of some interest to have shown that inferences ~my con~rlbute ~o pronoun resolution huc drawn for other reasons.
Appendix:
| null | null | null | null | {
"paperhash": [
"ehrlich|comprehension_of_pronouns",
"grosz|the_representation_and_use_of_focus_in_a_system_for_understanding_dialogs",
"bullwinkle|levels_of_complexity_in_discourse_for_anaphora_disambiguation_and_speech_act_interpretation"
],
"title": [
"Comprehension of Pronouns",
"The Representation and Use of Focus in a System for Understanding Dialogs",
"Levels of Complexity in Discourse for Anaphora Disambiguation and Speech Act Interpretation"
],
"abstract": [
"An experiment is reported in which subjects had to choose referents for pronouns in sentences such as: John blamed Bill because he spilt the coffee. To examine whether the choice of referent is influenced by features of the main verb or by the events described in the sentence, the relation between the events was altered by changing the conjunction. A significant effect of conjunction was obtained, but only when both antecedents matched the gender of the pronoun. When only one antecedent matched the pronoun, referents were chosen faster. From these results it is argued that readers use general knowledge to select referents for pronouns when gender does not identify a unique referent. A further effect of sentence structure on the time taken to select a referent was interpreted as showing that subjects analysed the sentences clause by clause.",
"As a dialog progresses the objects and actions that are most relevant to the conversation, and hence in the focus of attention of the dialog participants, change. This paper describes a representation of focus for language understanding systems, emphasizing its use in understanding task-oriented dialogs. The representation highlights that part of the knowledge base relevant at a given point in a dialog. A model of the task is used both to structure the focus representation and to provide an index into potentially relevant concepts in the knowledge base The use of the focus representation to make retrieval of items from the knowledge base more efficient is described.",
"This paper presents a discussion of means of describing the discourse and its components which makes speech act interpretation and anaphora disambiguation possible with minimal search of the knowledge in the database. A portion of this paper will consider how a frames representation of sentences and common sense knowledge provides a mechanism for representing the postulated discourse components. Finally some discussion of the use of the discourse model and of frames in a discourse understanding program for a personal assistant will be presented."
],
"authors": [
{
"name": [
"Karen Ehrlich"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Grosz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Bullwinkle"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null
],
"s2_corpus_id": [
"143747048",
"2484798",
"11005077"
],
"intents": [
[],
[],
[]
],
"isInfluential": [
false,
false,
false
]
} | null | 524 | 0.015267 | null | null | null | null | null | null | null | null |
c3cd52ea87505bbd1a71d8750199962b3186fba5 | 18978174 | null | What{'}s Necessary to Hide?: Modeling Action Verbs | This paper considers what types of knowledge one must possess in order to reason about actions. Rather than concentrating on how actions are performed, as is done in the problem-solving literature, it examines the set of conditions under which an action can be said to have occurred. In other words, if one is told that action A occurred, what can be inferred about the state of the world? In particular, if the representation can define such conditions, it must have good models of time, belief, and intention. This paper discusses these issues and suggests a formalism in which general actions and events can be defined. Throughout, the action of hiding a book from someone is used as a motivating example. I. Introductio, This paper suggests a formulation of events and actions that seems powerful enough to define a wide range of event and action verbs in English. This problem is interesting for two reasons• The first is that such a model is necessary to express the meaning of many sentences. The second is to analyze the language production and comprehension processes themselves as purposeful action. This was suggested some time ago by Bruce [1975] and Schmidt [1975]. Detailed proposals have been implemented recently for some aspects of language production [Cohen, 1978] and comprehension [Alien. 1979]. As interest in these methods grows (e.g., see [Grosz, 1979; Brachman, 1979]). the inadequacy of existing action models becomes increasingly obvious. | {
"name": [
"Allen, James F."
],
"affiliation": [
null
]
} | null | null | 19th Annual Meeting of the Association for Computational Linguistics | 1981-06-01 | 21 | 15 | null | The formalism for actions used in most natural language understanding systems is based on case grammar. Each action is represented by a set of assertions about the • semantic roles the noun phrases play with respect to the verb. Such a tbrmalism is a start, but does not explain how to represent what an action actually signifies. If one is told that a certain action occurred, what does one know about how the world changed (or didn't change!). This paper attempts to answer this question by oudining a temporal logic in which the occurrence of actions can be tied to descriptions of the world over time.One possibility for such a mechanism is found in the work on problem-solving systems (e.g. [I:ikes and Nilsson, 197] ; Sacerdoti, 1975] ), which suggests one common formulation of action. An acuon is a function from one world state to a succeeding world state and is described by a set of prerequisites and effects, or by decomposition into more primitive actions. While this model is extremely useful for modeling physical actions by a single actor, it does not cover a large class of actions describable in I-ngiish. [:or instance, many actions seemingly describe nml-activity (e.g. standing still), or acting in some nonspecified manner to preserve a state (e.g. preventing your televismn set from being stolen). Furthermore, many action descriptions appear to be a composition of simpler actions that are simultaneously executed. For instance, "Walking to the store while juggling three bails" seems to be composed of the actions of "walking to the store and "juggling three bails."It is not clear how such an action could be defined from the two simpler actions if we view actions as functions from one state to another.The approach suggested here models events simply as partial descriptions of the world over some Lime interval. Actions are then defined as a subclass of events that involve agents. Thus, it is simple to combine two actions into a new action, The new description simply consists of the two simpler descriptions hglding over the same intervalThe notions of prerequisite, result, and methods of performing actions will not arise in this study. While they are iraportant for reasoning about how to attain goals, they don't play an explicit role in defining when an action can be said to have occurred. To make this point clear, consider the simple action of turning on a light.There are few physical activities that are a necessary part of performing this action, Depending on the context, vastly different patterns or" behavior can be classified as the same action, l;or example, turning on a light usually involves Hipping a light switch, but in some circumstances it may involve tightening the light bulb (in the basement). or hitting the wail (m an old house). Although we have knowledge about how the action can be pertbrmed, this does nol define what the action is. The key defining characteristic of turning on the light seems to be that the agent is performing some activity which will cause the light, which is off when the action starts, to become on when the action ends. The importance of this observation is that we could recognize an observed pattern of activity as "turning on the light" even if we had never seen or thought about that pattern previously. The model described here is in many ways similar to that of Jackendoff [1976] . 
He provides a classification of event verbs that includes verbs of change (GO verbs) and verbs that assert a state remaining constant over an interval of time (STAY verbs), and defines a representation of action verbs of both typesby introducing the notion of agentive causality and permission. However, Jackendoff does not consider in detail how specific actions might be precisely defined with respect to a world model. The next two sections of this paper will introduce the temporal logic and then define the framework for defining events and actions. To be as precise as possible, I have remained within the notation of the first order predicate calculus• Once the various concepts are precisely defined, the next necessary step in this work is to define a computaUonally feasible representation and inference process, Some of this work has already been done. For example, a computational model of the temporal logic can be found in Allen [198.1] • Other areas axe currently under investigation.The final section demonstrates the generality of the approach by analyzing the action of hiding a book from someone. In this study, various other important conceptual entities such as belief, intention, and causality are briefly discussed. Finally, a definition of.what it means to hide something is presented using these tools.In order to define the role that events and actions play in the logic, the logical form of sentences asserting that an event has occurred must be discussed. Once even~ have been defined, actions will be defined in terms of them. One suggestion for the logical form is to define for each c[,,~ of events a property such that the property HOI.I)S only if the event occurred. This can be discarded immediately as axiom (A.]) is inappropriate for events. If an event occurred over some time interval "['. it does not mean that the event also occurred over all subintervals of T. So we introduce a new type of object in the logic, namely events, and a new predicate OCCUlt. l),y representing events as objects in the logic, we have avoided the difficulties described in Davidson [1967] .Simply giving the logical form of an event is only a small part of the analysis. We must also define for each event the set of conditions that constitute its occurrence. As mentioned in the introduction, there seems to be no restriction on what kind of conditions can he used to define an event except that they must partially describe the world over some time interval.For example, the event "the ball moving from x to y" could be modeled by a predicate MOVE with four arguments: the object, the source, the goal location, and the move event itself. Thus, MOVI' (IlalL x. y. m) asserts that m is an event consisting of the ball moving from x to y. We assert that this event occurred over time t by adding the assertionWith these details out of the way. we can now define necessary and sufficient conditions for the event's occurrence. For this simple class of move events, we need an axiom such as: A simple class of events consists of those that occur only if some property remains constant over a particular interval (c£ Jackendoffs STAY verbs). For example, we may assert in l'nglish "The ball was in the room during T.'" "The ball remained in the room during T."(forall object,While these appear to be logically equivalent, they may have very different consequences in a conversation. This formalism supports this difference. 
The former sentence asserts a proposition, and hence is of the formwhile the latter sentence describes an event, and hence is of the formWe may capture the logical equivalence of the two with the axiom: O'orall b.r,e,O REMAIN-IN(b,r,e) The problem remains as to how the differences between these logically equivalent formulas arise in context. One possible difference is that the second may lead the reader to believe that it easily might not have been the case.Actions are events that involve an agent in one of two ways. The agent may cause the event or may allow the event (cf. [Jackendoff, 1976] ). Corresponding to these two types of agency, there are two predicates, ACAUSE and ALLOW, that take an agent, an event, and an action as arguments. Thus the assertion corresponding to "John moved 13 from S to G" is MO VE (B, G,S, el) The remainder of this paper applies the above formalism to the analysis of the action of hiding a book from someone. Along the way, we shall need to introduce some new representational tools for the notions of belief, intention, and causality,The definition of hiding a book should be independent of any method by which the action was performed, for, depending on the context, the actor could hide a book in many different ways. For instance, the actor could put the book behind a desk, -stand between the book and the other agent while they are in the same room, or call a friend Y and get her or him to do one of the above.Furthermore, the actor might hide ).he book by simply not doing something s/he intended to do. I:or example, assume Sam is planning to go to lunch with Carole after picking Carole up at Carole's office, if, on the way out of Sam's office, Sam decides not to take his coat because he doesn't want Carole to see it, then Sam has hidden the coat from Carole. Of course, it is crucial here that Sam believed that he normally would have taken the coat. Sam couldn't have hidden his coat by forgetting to bring it.This example brings up a few key points that may not be noticed from the first three examples. First' Sam must have intended to hide the coat. Without this intention (i.e., in the forgetting case), no such action occurs. Second, Sam must have believed that it was likely that Carole would see the coat in the future course of events. Finally, Sam must have acted in such a way that he then believed that Carole would not see the coat in the future course of events. Of course, in this case, the action Sam performed was "not bringing the coat," which would normally not be considered an action unless it was intentionally not done. I claim that these three conditions provide a reasonably accurate definition of what it means to hide something. They certainly cover the four examples presented above. As stated previously, however, the definition is rather unsatisfactory, as many extremely difficult concepts, such as belief and intention, were thrown about casually.There is much recent work on models of belief (e.g., [Cohen, 1978; Moore, 1979; Perils, 1981 " Haas, 1981 ). l have little to add to these efforts, so the reader may assume his or her favorite model. I will assume that belief is a modal operator and is described by a set of axioms along the [iu~ of Hintikka [I962] . The one important thing to notice, though, is that there are two relevant time indices to each belief; namely, the time over which the belief is held, and the time over which the proposition that is believed holds. For example. I might believe ~oda.v that it rained last weekend. 
This point wiil be crucial in modeling the action of hiding. To introduce some notation, let "A believes (during To) that p holds (during Tp)" be expressed asThe notion of intention is much less understood than the notion of belief. However, let us approximate the statement "A intends (during Ti) that action a happen (during Ta)" by and "A believes (during Ti)that a happen (during Ta)" "A wants (during Ti) that a happen (during Ta)" This is obviously not a philosophically adequate definiuon (e.g., see [Searle, 1980] ), but seems sufficient for our present purposes. The notion of wanting indicates that the actor finds the action desirable given the alternatives. This notion appears impossible to axiomatize as wants do not appear to be rational (e.g. Hare []97]]). However, by adding the belief that the action will occur into the notion of intention, we ensure that intentions must be at least as consistent as beliefs.Actions may be performed intentionally or unintentionally. For example, consider the action of breaking a window. Inferring intentionality from observed action is a crucial ability needed in order to communicate and cooperate with other agents. While it is difficult to express a logical connection between action and intention, one can identify pragmatic or plausible inferences that can be used in a computational model (see [Allen, 1979] ).With these tools, we can attempt a more precise definition of hiding. The time intervals that will be required are:Th--the time of the hiding event;Ts--the time that Y is expected to see the book;Tbl--the time when X believes Y will see the book during "l's, which must be BEFORE "l'h;Tb3--the time when X believes Y will not see the book during Ts, which must be BEI"ORE or DURING Th and AI"I'I'~R Tbl.We will now define the predicate H I D I. '(agent, observer, object, a~t) which asserts that act is an action of hiding. Since it describes an action, we have the simple axiom capturing agency: (forall agent, observer, obJect, act H I D l:'(agent, observer, object, act) =) (Exists e ACAUSE(agent, e, act)))l.et us also introduce an event predicate S E l:'(agent, object, e) which asserts that e is an event consisting of agent seeing the object.Now we can define HIDE as follows: (forall ag, obs, o.a. 77z, HIDl'.'(ag.obs, o, a) (obs, o,e) and the intervals Th, Ts, Tb], Tb3 are related as discussed above. Condition (4) defines e as a seeing event, and might also need to be within ag's beliefs.This definition is lacking part of our analysis; namely that there is no mention that the agent's beliefs changed because of something s/he did. We can assert that the agent believes (between Tbl and Tb3) he or she will do an action (between Tbl and Th) as follows: (existx" al, el, Tb2 5) ACAUSlf(a&el, aD 6) H O LDS(believes(ag, OCC UR(al, Tal) ), Tb2) where 7"b1 ( Tb2 ( Tb3 and Tbl (But this has not caused the change in (3) are true, asserting Tal ( Tit captured the notion that belief (6) belief from (2) to (3). Since (6) and a logical implication from (6) to (3) would have no force. It is essential that the belief (6) be a key-element in the reasoning that leads to belief (3).To capture this we must introduce a notion of causality. This notion differs from ACAUSE in many ways (e.g. see [Taylor, 1966] ), but for us the major difference is that, unlike ACAUSE, it suggests no relation to intentionality. While ACAUSE relates an agent to an event, CAUSE relates events to events. 
The events in question here would be coming to the belief (6), which CAUSES coming to the belief (3).One can see that much of what it means to hide is captured by the above. In particular, the following can be extracted directly from the definition: -if you hide something, you intended to hide it, and thus can be held responsible for the action's consequences;one cannot hide something if it were not possible that it could be seen, or if it were certain that it would be seen anyway; -one cannot hide something simply by changing one's mind about whether it will be seen.In addition, there ate many other possibilities related to the temporal order of events. For instance, you can't hide something by performing an action after ,,he hiding is supposed to be done.I have introduced a representation for events and actions that is based on an interval-based temporal logic. This model is sufficiently powerful to describe events and actions that involve change, as well as those that involve maintaining a state. In addition, the model readily allows the composition and modification of events and actions.In order to demonstrate the power of the model, the action of hiding was examined in detail. This forced the introduction of the notions of belief, intention, and causality. While this paper does not suggest any breakthroughs in representing these three concepts, it does suggest how they should interact with the notions of time, event, and action.At present, this action model is being extended so that reasoning about performing actions can be modeled. This work is along the lines described in [Goldman, 1970] . | null | null | Before we can characterize events and actions, we need to specify a temporal logic. The logic described here is based on temporal intervals. Events that appear to refer to a point in time (i.e., finishing a race) are considered to be implicitly referring to another event's beginning or ending. Thus the only time points we will see will be the endpoints of intervals.The logic is a typed first order predicate calculus, in which the terms fall into the following three broad categories:-terms of type TIME-INTERVAL denodng time intervals;terms of type PROPERTY, denoting descriptions that can hold or not hold during a particular time; and terms corresponding to objects in the domain.There are a small number of predicates. One of the most important is HOLDS, which asserts that a property holds (i.e., is true) during a time interval..Thusis true only if property p holds during t. As a subsequent axiom will state, this is intended to mean that p holds at every subinterval oft as well.There is no need to investigate the behavior of MEETS(tl, t2)--interval tl is before interval 12, but there is no interval between them, i.e., tl ends where t2. starts. Given these predicates, there is a set of axioms defining their interrelations. For example, there are axioms dealing with the transitivity of the temporal relationships. Also, there is the axiom mentioned previously when the HOI,I)S predicate wa~ introduced: namely HOI,DS(p.tl) This gives us enough tools to define the notion of action in the next section.(A.]) IfOLDS(p.t) & DURING(tl.t) --) | null | Main paper:
a temporal l,ogie:
Before we can characterize events and actions, we need to specify a temporal logic. The logic described here is based on temporal intervals. Events that appear to refer to a point in time (i.e., finishing a race) are considered to be implicitly referring to another event's beginning or ending. Thus the only time points we will see will be the endpoints of intervals.The logic is a typed first order predicate calculus, in which the terms fall into the following three broad categories:-terms of type TIME-INTERVAL denodng time intervals;terms of type PROPERTY, denoting descriptions that can hold or not hold during a particular time; and terms corresponding to objects in the domain.There are a small number of predicates. One of the most important is HOLDS, which asserts that a property holds (i.e., is true) during a time interval..Thusis true only if property p holds during t. As a subsequent axiom will state, this is intended to mean that p holds at every subinterval oft as well.There is no need to investigate the behavior of MEETS(tl, t2)--interval tl is before interval 12, but there is no interval between them, i.e., tl ends where t2. starts. Given these predicates, there is a set of axioms defining their interrelations. For example, there are axioms dealing with the transitivity of the temporal relationships. Also, there is the axiom mentioned previously when the HOI,I)S predicate wa~ introduced: namely HOI,DS(p.tl) This gives us enough tools to define the notion of action in the next section.(A.]) IfOLDS(p.t) & DURING(tl.t) --)
events and actions:
In order to define the role that events and actions play in the logic, the logical form of sentences asserting that an event has occurred must be discussed. Once even~ have been defined, actions will be defined in terms of them. One suggestion for the logical form is to define for each c[,,~ of events a property such that the property HOI.I)S only if the event occurred. This can be discarded immediately as axiom (A.]) is inappropriate for events. If an event occurred over some time interval "['. it does not mean that the event also occurred over all subintervals of T. So we introduce a new type of object in the logic, namely events, and a new predicate OCCUlt. l),y representing events as objects in the logic, we have avoided the difficulties described in Davidson [1967] .Simply giving the logical form of an event is only a small part of the analysis. We must also define for each event the set of conditions that constitute its occurrence. As mentioned in the introduction, there seems to be no restriction on what kind of conditions can he used to define an event except that they must partially describe the world over some time interval.For example, the event "the ball moving from x to y" could be modeled by a predicate MOVE with four arguments: the object, the source, the goal location, and the move event itself. Thus, MOVI' (IlalL x. y. m) asserts that m is an event consisting of the ball moving from x to y. We assert that this event occurred over time t by adding the assertionWith these details out of the way. we can now define necessary and sufficient conditions for the event's occurrence. For this simple class of move events, we need an axiom such as: A simple class of events consists of those that occur only if some property remains constant over a particular interval (c£ Jackendoffs STAY verbs). For example, we may assert in l'nglish "The ball was in the room during T.'" "The ball remained in the room during T."(forall object,While these appear to be logically equivalent, they may have very different consequences in a conversation. This formalism supports this difference. The former sentence asserts a proposition, and hence is of the formwhile the latter sentence describes an event, and hence is of the formWe may capture the logical equivalence of the two with the axiom: O'orall b.r,e,O REMAIN-IN(b,r,e) The problem remains as to how the differences between these logically equivalent formulas arise in context. One possible difference is that the second may lead the reader to believe that it easily might not have been the case.Actions are events that involve an agent in one of two ways. The agent may cause the event or may allow the event (cf. [Jackendoff, 1976] ). Corresponding to these two types of agency, there are two predicates, ACAUSE and ALLOW, that take an agent, an event, and an action as arguments. Thus the assertion corresponding to "John moved 13 from S to G" is MO VE (B, G,S, el) The remainder of this paper applies the above formalism to the analysis of the action of hiding a book from someone. Along the way, we shall need to introduce some new representational tools for the notions of belief, intention, and causality,The definition of hiding a book should be independent of any method by which the action was performed, for, depending on the context, the actor could hide a book in many different ways. 
For instance, the actor could put the book behind a desk, -stand between the book and the other agent while they are in the same room, or call a friend Y and get her or him to do one of the above.Furthermore, the actor might hide ).he book by simply not doing something s/he intended to do. I:or example, assume Sam is planning to go to lunch with Carole after picking Carole up at Carole's office, if, on the way out of Sam's office, Sam decides not to take his coat because he doesn't want Carole to see it, then Sam has hidden the coat from Carole. Of course, it is crucial here that Sam believed that he normally would have taken the coat. Sam couldn't have hidden his coat by forgetting to bring it.This example brings up a few key points that may not be noticed from the first three examples. First' Sam must have intended to hide the coat. Without this intention (i.e., in the forgetting case), no such action occurs. Second, Sam must have believed that it was likely that Carole would see the coat in the future course of events. Finally, Sam must have acted in such a way that he then believed that Carole would not see the coat in the future course of events. Of course, in this case, the action Sam performed was "not bringing the coat," which would normally not be considered an action unless it was intentionally not done. I claim that these three conditions provide a reasonably accurate definition of what it means to hide something. They certainly cover the four examples presented above. As stated previously, however, the definition is rather unsatisfactory, as many extremely difficult concepts, such as belief and intention, were thrown about casually.There is much recent work on models of belief (e.g., [Cohen, 1978; Moore, 1979; Perils, 1981 " Haas, 1981 ). l have little to add to these efforts, so the reader may assume his or her favorite model. I will assume that belief is a modal operator and is described by a set of axioms along the [iu~ of Hintikka [I962] . The one important thing to notice, though, is that there are two relevant time indices to each belief; namely, the time over which the belief is held, and the time over which the proposition that is believed holds. For example. I might believe ~oda.v that it rained last weekend. This point wiil be crucial in modeling the action of hiding. To introduce some notation, let "A believes (during To) that p holds (during Tp)" be expressed asThe notion of intention is much less understood than the notion of belief. However, let us approximate the statement "A intends (during Ti) that action a happen (during Ta)" by and "A believes (during Ti)that a happen (during Ta)" "A wants (during Ti) that a happen (during Ta)" This is obviously not a philosophically adequate definiuon (e.g., see [Searle, 1980] ), but seems sufficient for our present purposes. The notion of wanting indicates that the actor finds the action desirable given the alternatives. This notion appears impossible to axiomatize as wants do not appear to be rational (e.g. Hare []97]]). However, by adding the belief that the action will occur into the notion of intention, we ensure that intentions must be at least as consistent as beliefs.Actions may be performed intentionally or unintentionally. For example, consider the action of breaking a window. Inferring intentionality from observed action is a crucial ability needed in order to communicate and cooperate with other agents. 
While it is difficult to express a logical connection between action and intention, one can identify pragmatic or plausible inferences that can be used in a computational model (see [Allen, 1979] ).With these tools, we can attempt a more precise definition of hiding. The time intervals that will be required are:Th--the time of the hiding event;Ts--the time that Y is expected to see the book;Tbl--the time when X believes Y will see the book during "l's, which must be BEFORE "l'h;Tb3--the time when X believes Y will not see the book during Ts, which must be BEI"ORE or DURING Th and AI"I'I'~R Tbl.We will now define the predicate H I D I. '(agent, observer, object, a~t) which asserts that act is an action of hiding. Since it describes an action, we have the simple axiom capturing agency: (forall agent, observer, obJect, act H I D l:'(agent, observer, object, act) =) (Exists e ACAUSE(agent, e, act)))l.et us also introduce an event predicate S E l:'(agent, object, e) which asserts that e is an event consisting of agent seeing the object.Now we can define HIDE as follows: (forall ag, obs, o.a. 77z, HIDl'.'(ag.obs, o, a) (obs, o,e) and the intervals Th, Ts, Tb], Tb3 are related as discussed above. Condition (4) defines e as a seeing event, and might also need to be within ag's beliefs.This definition is lacking part of our analysis; namely that there is no mention that the agent's beliefs changed because of something s/he did. We can assert that the agent believes (between Tbl and Tb3) he or she will do an action (between Tbl and Th) as follows: (existx" al, el, Tb2 5) ACAUSlf(a&el, aD 6) H O LDS(believes(ag, OCC UR(al, Tal) ), Tb2) where 7"b1 ( Tb2 ( Tb3 and Tbl (But this has not caused the change in (3) are true, asserting Tal ( Tit captured the notion that belief (6) belief from (2) to (3). Since (6) and a logical implication from (6) to (3) would have no force. It is essential that the belief (6) be a key-element in the reasoning that leads to belief (3).To capture this we must introduce a notion of causality. This notion differs from ACAUSE in many ways (e.g. see [Taylor, 1966] ), but for us the major difference is that, unlike ACAUSE, it suggests no relation to intentionality. While ACAUSE relates an agent to an event, CAUSE relates events to events. The events in question here would be coming to the belief (6), which CAUSES coming to the belief (3).One can see that much of what it means to hide is captured by the above. In particular, the following can be extracted directly from the definition: -if you hide something, you intended to hide it, and thus can be held responsible for the action's consequences;one cannot hide something if it were not possible that it could be seen, or if it were certain that it would be seen anyway; -one cannot hide something simply by changing one's mind about whether it will be seen.In addition, there ate many other possibilities related to the temporal order of events. For instance, you can't hide something by performing an action after ,,he hiding is supposed to be done.I have introduced a representation for events and actions that is based on an interval-based temporal logic. This model is sufficiently powerful to describe events and actions that involve change, as well as those that involve maintaining a state. In addition, the model readily allows the composition and modification of events and actions.In order to demonstrate the power of the model, the action of hiding was examined in detail. 
This forced the introduction of the notions of belief, intention, and causality. While this paper does not suggest any breakthroughs in representing these three concepts, it does suggest how they should interact with the notions of time, event, and action.At present, this action model is being extended so that reasoning about performing actions can be modeled. This work is along the lines described in [Goldman, 1970] .
:
The formalism for actions used in most natural language understanding systems is based on case grammar. Each action is represented by a set of assertions about the • semantic roles the noun phrases play with respect to the verb. Such a tbrmalism is a start, but does not explain how to represent what an action actually signifies. If one is told that a certain action occurred, what does one know about how the world changed (or didn't change!). This paper attempts to answer this question by oudining a temporal logic in which the occurrence of actions can be tied to descriptions of the world over time.One possibility for such a mechanism is found in the work on problem-solving systems (e.g. [I:ikes and Nilsson, 197] ; Sacerdoti, 1975] ), which suggests one common formulation of action. An acuon is a function from one world state to a succeeding world state and is described by a set of prerequisites and effects, or by decomposition into more primitive actions. While this model is extremely useful for modeling physical actions by a single actor, it does not cover a large class of actions describable in I-ngiish. [:or instance, many actions seemingly describe nml-activity (e.g. standing still), or acting in some nonspecified manner to preserve a state (e.g. preventing your televismn set from being stolen). Furthermore, many action descriptions appear to be a composition of simpler actions that are simultaneously executed. For instance, "Walking to the store while juggling three bails" seems to be composed of the actions of "walking to the store and "juggling three bails."It is not clear how such an action could be defined from the two simpler actions if we view actions as functions from one state to another.The approach suggested here models events simply as partial descriptions of the world over some Lime interval. Actions are then defined as a subclass of events that involve agents. Thus, it is simple to combine two actions into a new action, The new description simply consists of the two simpler descriptions hglding over the same intervalThe notions of prerequisite, result, and methods of performing actions will not arise in this study. While they are iraportant for reasoning about how to attain goals, they don't play an explicit role in defining when an action can be said to have occurred. To make this point clear, consider the simple action of turning on a light.There are few physical activities that are a necessary part of performing this action, Depending on the context, vastly different patterns or" behavior can be classified as the same action, l;or example, turning on a light usually involves Hipping a light switch, but in some circumstances it may involve tightening the light bulb (in the basement). or hitting the wail (m an old house). Although we have knowledge about how the action can be pertbrmed, this does nol define what the action is. The key defining characteristic of turning on the light seems to be that the agent is performing some activity which will cause the light, which is off when the action starts, to become on when the action ends. The importance of this observation is that we could recognize an observed pattern of activity as "turning on the light" even if we had never seen or thought about that pattern previously. The model described here is in many ways similar to that of Jackendoff [1976] . 
He provides a classification of event verbs that includes verbs of change (GO verbs) and verbs that assert a state remaining constant over an interval of time (STAY verbs), and defines a representation of action verbs of both typesby introducing the notion of agentive causality and permission. However, Jackendoff does not consider in detail how specific actions might be precisely defined with respect to a world model. The next two sections of this paper will introduce the temporal logic and then define the framework for defining events and actions. To be as precise as possible, I have remained within the notation of the first order predicate calculus• Once the various concepts are precisely defined, the next necessary step in this work is to define a computaUonally feasible representation and inference process, Some of this work has already been done. For example, a computational model of the temporal logic can be found in Allen [198.1] • Other areas axe currently under investigation.The final section demonstrates the generality of the approach by analyzing the action of hiding a book from someone. In this study, various other important conceptual entities such as belief, intention, and causality are briefly discussed. Finally, a definition of.what it means to hide something is presented using these tools.
Appendix:
| null | null | null | null | {
"paperhash": [
"grosz|utterance_and_objective:_issues_in_natural_language_communication",
"allen|maintaining_knowledge_about_temporal_intervals",
"brachman|taxonomy,_descriptions,_and_individuals_in_natural_language_understanding",
"schmidt|understanding_human_action",
"perlis|language,_computation,_and_reality",
"wilensky|understanding_goal-based_stories",
"sacerdoti|a_structure_for_plans_and_behavior",
"jackendo|toward_an_explanatory_semantic_representation",
"bruce|belief_systems_and_language_understanding"
],
"title": [
"Utterance and Objective: Issues in Natural Language Communication",
"Maintaining knowledge about temporal intervals",
"Taxonomy, Descriptions, and Individuals in Natural Language Understanding",
"Understanding Human Action",
"Language, computation, and reality",
"Understanding Goal-Based Stories",
"A Structure for Plans and Behavior",
"Toward an Explanatory Semantic Representation",
"Belief systems and language understanding"
],
"abstract": [
"Two premises, reflected in the title, underlie the perspective from which I will consider research in natural language processing in this article. First, progress on building computer systems that process natural languages in any meaningful sense (i.e., systems that interact reasonably with people in natural language) requires considering language as part of a larger communicative situation. Second, as the phrase “utterance and objective” suggests, regarding language as communication requires consideration of what is said literally, what is intended, and the relationship between the two.",
"An interval-based temporal logic is introduced, together with a computationally effective reasoning algorithm based on constraint propagation. This system is notable in offering a delicate balance between",
"KLONE i s a g e n e r a l p u r p o s e language f o r r e p r e s e n t i n g conceptual information. Several of its pr~linent features semantically clean inheritance of structured descriptions, taxonomic classification of gpneric knowledge, intensional structures for functional roles (including the possibility of multiple fillers), and procedural attachment (with automatic invocation) make it particularly useful in computer-based natural language understanding. We have implemented a prototype natural language system that uses KLONE extensively in several facets of its operation. This paper describes the system and points out some of the benefits of using KLONE for representation in natural language processing.",
"Wittgenstein has said, \"If a lion could talk, we could not understand him\" (1958). The point of this rather cryptic comment is undoubtedly Wittgenstein's contention that language or \"language games\" are embedded in what he termed \"forms of life.\" That is, we are able to understand each other not just because we share common knowledge about the syntactic and semantic conventions for the use of words, but also because we share common knowledge about the forms of life or social reality within which we live and act. Wittgenstein's remarkable lion would presumably not share our social reality nor we have knowledge of the lion's social reality. Consequently, Wittgenstein would contend that this lion's exhibition of speech would not result in our being able to communicate with him nor he with us.",
"The main theme of this thesis is the interplay of assertion and meaning, or quotation and un-quotation, in reasoning entities. This is motivated largely by analysis of the notion of possibility in several contexts, most specifically, in relation to resource-limited computational models of belief and inference, as well as in philosophy of science. \nA first-order treatment of quotation and un-quotation is given that allows broad and paradox-free expression of syntax and semantics. It is argued that this makes unnecessary the usual hierarchical constructions for notions such as default reasoning, theory subsumption, concepts, beliefs, and self-reference, and indeed that even greater expressive power is achieved than in those treatments, with reduced complexity of notation. \nThis is then applied to a model of belief and inference in which focus of attention is a key element. Effort is made to isolate certain automatic inferences apparently part of the very meaning of propositional beliefs, and then base more sophisticated thinking on these. \nFinally, some thoughts are presented on how resource-limited computation may bear on the notion of possibility in foundations of physics and modal logic.",
"Abstract : Reading requires reasoning. A reader often needs to infer connections between the sentences of a text and must therefore be capable of reasoning about the situations to which the text refers. People can reason about situations because they posses a vast store of knowledge which they can use to infer implicit parts of a situation from those aspects of the situation explicitly described by a text. PAM (Plan Applier Mechanism) is a computer program that understands stories by reasoning about the situations they reference. PAM reads stories in English and produces representations for the stories that include the inferences needed to connect each story's events. To demonstrate that it has understood a story, PAM answers questions about the story and expresses the story from several points of view. PAM reasons about the motives of a story's characters. Many inferences needed for story understanding are concerned with finding explanations for events in the story. PAM has a great deal of knowledge about people's goals which it applies to find explanations for the actions taken by a story's characters in terms of that character's goals and plans.",
"Abstract : This report describes progress to date in the ability of a computer system to understand and reason about actions. A new method of representing actions within a computer's memory has been developed, and this new representation, called the \"procedural net,\" has been employed in developing new strategies for solving problems and monitoring the execution of the resulting solutions. A set of running computer programs, called the NOAH (Nets Of Action Hierarchies) system, embodies this representation. Its major goal is to provide a framework for storing expertise about the actions of a particular task domain, and to impart that expertise to a human in the cooperative achievement of nontrivial tasks. A problem is presented to NOAH as a statement that is to be made true by applying a sequence of actions in an initial state of the world. The actions are drawn from a set of actions previously defined to the system. NOAH first creates a one-step solution to the problem, then it progressively expands the level of detail of the solution, filling in ever more detailed actions. All the individual actions, composed into plans at differing levels of detail, are stored in the procedural net. The system avoids imposing unnecessary constraints on the order of the actions in a plan. Thus, plans are represented as partial orderings of actions, rather than as linear sequences. The same data structure is used to guide the human user through a task. Since the system has planned the task at varying levels of detail, it can issue requests for action to the user at varying levels of detail, depending on his/her competence and understanding of the higher level actions. If more detail is needed than was originally planned for, or if an unexpected event causes the plan to go awry, the system can continue to plan from any point during execution. In essence, the structure of a plan of actions is as important for problem solving and execution monitoring as the nature of the actions themselves.",
"An entertainment apparatus is described for simulating the basic play options of the game of college football. The apparatus includes a gameboard to simulate a football field and includes thereon: space at each end for run or pass options to be indicated thereupon, a football-shaped ball position yardline marker, a first down indicator, an individual down indicator, and two play designation markers. Two decks of cards provide a plurality of play situations whereby two persons representing offensive and defensive quarterbacks can match wits to move a simulated football back and forth upon the gameboard. A multiplicity of down cards are controlled by the player starting on the offense while a multiplicity of kick cards are controlled by the defensive player. The offensive player, using a shield to hide his choice from view, chooses either a run or a pass and places a play designation marker on the appropriate space on the gameboard to so indicate. The defensive player, on a signal, then attempts to anticipate the offensive play. The outcome is determined by comparing the offensive call with the defensive guess. Correct defensive guesses are statistically more favorable for the defensive player while incorrect defensive guesses favor the offensive player.",
"Abstract : The paper discusses some of the 'belief systems knowledge' used in language understanding. It begins with a presentation of a theory of personal causation. The theory supplies the tools to account for purposeful behavior. Using primitives of the theory the social aspect of an action can be described. The social aspect is that which depends on beliefs and intentions. Patterns of behavior, called 'social action paradigms' (SAP's), are then defined in terms of social actions. The SAP's provide a structure for episodes analogous to the structure a grammar provides for sentences."
],
"authors": [
{
"name": [
"B. Grosz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"James F. Allen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Brachman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Schmidt"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Perlis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Wilensky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Sacerdoti"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Jackendo"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Bertram C. Bruce"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"36758958",
"16729000",
"13484170",
"3911097",
"60553326",
"9899836",
"60729110",
"59802720",
"118337048"
],
"intents": [
[],
[],
[],
[],
[],
[],
[],
[
"background",
"result"
],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | - Problem: The paper aims to address the types of knowledge required to reason about actions, focusing on the conditions under which an action can be considered to have taken place.
- Solution: The paper proposes a formalism involving models of time, belief, and intention to define general actions and events, using the example of hiding a book as a motivating case study. | 524 | 0.028626 | null | null | null | null | null | null | null | null |
b02a5d620bccc37ad7d784b7f26a8bbf0af41870 | 10832195 | null | Analogies in Spontaneous Discourse | This paper presents an analysis of analogies based on observations of natural conversations. People's spontaneous use of analogies provides insight into their implicit evaluation procedures for analogies. The treatment here, therefore, reveals aspects of analogical processing that are somewhat more difficult to see in an experimental context. The work involves explicit treatment of the discourse context in which analogy occurs. A major focus here is the formalization of the effects of analogy on discourse development. There is much rule-like behavior in this process, both in the underlying thematic development of the discourse and in the surface linguistic forms used in this development. Both these forms of regular behavior are discussed in terms of a hierarchical structuring of a discourse into distinct, but related and linked, context spaces. | {
"name": [
"Reichman, Rachel"
],
"affiliation": [
null
]
} | null | null | 19th Annual Meeting of the Association for Computational Linguistics | 1981-06-01 | 25 | 5 | null | People's use of analogies in conversation reveals a rich set of processing strategies.Consider the following example.A: B:I. I think if you're going to marry someone in the 2. Hindu tradition, you have to -Well, you -They 3. say you give money to the family, to the glrl, 4. but in essence, you actually buy her.5. It's the same in the Western tradition. You 6. know, you see these greasy fat millionaires going 7. around with film stars, right? They've 8. essentially bought them by their status (?money). 9 . HO, but, there, the woman is selllng herself. 10. In these societies, the woman isn't selling 11. herself, her parents are selling her.There are several interesting things happening in this exchange. For example, notice that the analogy is argued and discussed by the conversants, and that in the arEumentatlon C uses the close discourse deictlo "these" tO refer to the in~tlatlng subject of the a~alogy, and that she uses the far discourse delctlo "there" to refer to the linearly closer analogous utterances. In addition, notice that C bases her rejection ca a noncorrespondence of relations effectlng the relation claimed constant between the two domains (women hei~ sold).She does not simply pick any arbitrary noncorrespondence between the two domains.In the body of this paper, I address and develop these types of phenomena accompanying analogies in naturally ongoing discourse.The body of the paper is divided into four sections. First a theoretic framework for discourse is presented. This is followed by some theoretic work on analo~es, an integration of this work with the general theory of discourse proposed here, and an illuntratlon of how the II would llke to thank Dedre Gentner for many useful comments end discussions.integration of the different approaches explicates the issues under discussion.In the last section of the paper, I concentrate on some surface llngulstlo phenomena accompanying a oonversant's use of analogy in spontaneous discourse.A close analysis of spontaneous dialogues reveals that discourse processing is focused and enabled by a conversant's ability to locate ~ single frame of reference [19, 15, 16] for the discussion.In effective communication, listeners are able to identify such a frame of reference by partitioning discourse utterances into a hierarchical organization of distinct but related and linked context snaces. At any given point, only some of these context spaces are in the foreground of discourse.Foreg~ounded context spaces provide the ~eeded reference frame for subsequent discussion.abstract process model of discourse generation/interpretation incorporatlng a hierarchical view of discourse has been designed using the formalism of an Augmented Transition Network (ATN) [29] 2 .The ~Ta~r encoding the context space theory [20, 22] views a conversation as a sequence of conversatlooal moves. 
Conversational moves correspond to a speaker's communioatlve goal vis-A-vis a particular preceding section of discourse.Among the types of conversational moves -speaker communicative goals -formalized in the grammar are:Challenge, Support, Future-Generallzation, and Further-Development.The correlation between a speakerPs utterances and a speaker's communicative goal in the context space grammar is somewhat s~m~lar to a theory of speech acts A la Austin, Searle, and Grloe [I, 2q, 9] .As in the speech act theory, a speaker's conversatloral move is recognized as a functional communicative act [q] with an associated set of preconditions, effects, and mode of fulfillment. However, in the context space approach, the acts recognlzed are specific to maxlm-abldlng thematic conversational development, and their preconditions and effects stem from the discourse structure (rather than from/on arbitrary states in the external world).All utterances that serve the fulfillment of a slng~le communicative goal are partitloned into a single discourse unit -called a context space. A context space characterizes the role that its various parts play In the overall discourse structure and it explicates features relevant to "well-formedness" and "maxim-abiding" discourse development. ~ine types of context spaces have been formalized in the grammar representing the different constituent types of a discourse.The spaces are characterized in much the same way as elements of a • Systemic Grammar" A la Halllday [10] via attributes represented as "slots" per Minsky [I~] .All context spaces have slots for the followlng elements:2The rules incorporated in the grammar by themselves do not form a complete system of discourse generation/inter pretatlon.Rather, they enable specification of a set of high level Semantlc/log~Ical constraints that a surface lln~istlc from has to meet in order to fill a certain maxlm-abidlng conversational role at a given point in the discourse.o a propositional representation of the set of functionally related utterances said to iie An the space;o the communicative goal served by the space; o a marker reflecting the influential status of the space at any given point in the discourse;o links to preceding context spaces in relatlon to which this context space wan developed;o specification at the relations involved.An equally important feature of a context space are its slots that hold the inferred components needed to recognize the communicative goal that the space serves in the discourse context. There are various ways to fulfill a given communicative goal, and usually, dependent on the mode of fulfillment and the goal in question, one can characterize a set of standardized implicit components that need to be inferred.For example, as noted by investigators of argumentation (e.g., [~, 23, 5, 22] ), in interpreting a proposition as supporting another, we often need to infer some sot of mappings between an Interred generic principle of support, the stated proposition of support, and the claim being supported. 
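As a concrete illustration of the slot structure described above, here is a minimal sketch (the field and status names are assumptions for illustration, not the grammar's own notation) of a context space record, including a slot for the standardized inferred components that a support move typically requires.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List

class Status(Enum):
    ACTIVE = "active"            # current foreground space
    CONTROLLING = "controlling"  # frame of reference for the active space
    OPEN = "open"
    CLOSED = "closed"            # no longer plays a foreground role

@dataclass
class ContextSpace:
    propositions: List[str]                # functionally related utterances, in propositional form
    goal: str                              # communicative goal served (e.g. "support", "challenge")
    status: Status = Status.ACTIVE         # influential status at this point in the discourse
    links: List[str] = field(default_factory=list)      # spaces this one was developed in relation to
    relations: Dict[str, str] = field(default_factory=dict)   # specification of the relations involved
    inferred: Dict[str, str] = field(default_factory=dict)    # standardized implicit components, per goal/mode

# Example: a support space carries the inferred generic principle and rule of
# inference that connect the stated support to the claim it supports.
support = ContextSpace(
    propositions=["you give money to the family, so in essence you buy her"],
    goal="support",
    links=["claim: marrying in the Hindu tradition involves buying the bride"],
    inferred={"generic_principle": "paying for a person's companionship amounts to buying that person",
              "rule_of_inference": "an instance of a generic principle supports the corresponding claim"},
)
```

This is only a data-structure sketch; the grammar itself treats such inferred components as part of the discourse on a par with what is verbally expressed, as the surrounding text explains.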
We must also infer some general rule of inference that allows for conclusion a claim given the explicit statements of support and these inferred components.this standardization of inferential elaborations, I have oategorlzed dlfferent types of context spaces based on communicative goal and method fttlftllment charaeterlzatlons (i.e., specification of specific slots needed to hold the standardized inferential elaboratlons particular tO a g~Lven goal and mode of fulfillment).Dellneatioo of context spaces, then, is functlomally based, and in the context space grammar, ImplAclt components of a move are treated an much a part of the discourse as those components verbally expressed.Znterpretlng/understanding an analogy obviously involves some inferenoing ca the part of a listener. An analogous context space, therefore, has some slots particular to it.The grammar's characterization of an analogous context space is derivative from its for~uLl analysis of an analogy oonversatlom-l move. Gentner's analysis can be used to explain B's analogy between the Hindu and Western traditions in Excerpt I. The relation ~ BUYING WOMAN. FOR $0~ COMPANION FUNCTION is held constant between the two doma/ns, and the appropriateness Of the analogy iS not affected, for instance, hy the noncorrespondlng political views and/or religions of the two societies.While Gentner cuts down on the number of correspondences that must exist between two domains for an analogy to be considered good, she still leaves open a rather wide set o£ relations that must seemingly be matched between a base and target domain.We need some. way to further characterize Just those relations that must be mapped. For example, the relation TRADING WITH CHINA is totally irrelevant to the Hindu-Western analogy in this discourse context.As noted by Lakoff & Johnson [12] , metaphors simultaneously "highlight" and "hide" aspects of the two domains being mapped onto each other.The context space theory supplements both Lakoff & Johnson's analysis and the structure-mapplng approach in its ability to provide relevant relation characterization.. The context space grammar's analysis of analogies can be characterized by the following:Explicating the connection between an utterance purportlng to make a claim analogous to another rests on recoghizlng that fc~. two propositions to be analogous, it anst be the cnse that they can bo ~h be seen an ~nstanc,s Of some more general claim, such that the predicates of all three propositions are identloal (i.e., relation identity), and the correspondent objects of the two domains involved are both subsets of some larger sot specified in this more general claim.is based on specifylng some relation, RI, of one domain, that one implies (or claims) is not true in the other; or is based on specifying some non-ldentloal attrlbute-value pair ~'om whloh such a relation, RI, can be inferred.In both cases, RI oust itself stand in a 'CAUSE' relation (or soma other such relatlon 3) with one Of the relations explicitly mentioned in the creation of the analogy (i.e., one being held constant between the two domains, that we csul call RC). Furthermore, it must be the cnse that the communicative goal of the analogy hinges on RI(RC) being true (or not true) in both of the domains.Re£1ectlng this analysis of ~--!o~Les, all analogous context spaces have the followlng slot deflnltlons (among others).This slot contains the generic proposltlon, P, of which the Inltlatlng and analogous claim are instances. 
Reflecting the fact that the same predication must be true of both cla.lms, 3Since aceordin~ to this analysis the prime focal point of the analogy is always the relations (i.e., "actions") being held constant, and a major aspect of an "action" is its cause (reason, intent, or effect of occurrence), a non~orrespondenoe in one of these relations will usually invalidate the point at the analogy.Proposition:Mappings:the predicate in the abstract slot is fixed; other elements of the abstract are variables corresponding to the abstracted clansea of which the specific elements mentionod in the analogous and initiating clalms are members.The structure of this slot, reflecting this importance of relation identity, consists of two subslots:This slot contains a llst of the relations that are constant and true in the two domains. Analogy construction entails a local shift in toplo, and, therefore, in general, a/tar discussion of the analogous space (iscluding its component parts, such as "supports-of," "challenges-of,, etc.), we have immediate resumption of the initiating context space. (When analogies are used for goals ~ & 5 noted above, if the analogy is accepted, then there need not be a return to the initiating space.)In this section, I present an analysis of an excerpt in which convereants spontaneously generate and argue about analog~les.The analysis hiEhlights the efficacy of inteKratlng the structure napping approach with r~e communicative gnal directed approach of the context space theory.The excerpt also illustrates the rule-llke behavior governing continued thematic development of a discourse after an analogy is given. N denies such a presummed negativity by arguing that it is possible to view America's involvement in Vietnam a~ coming to the aid of a country under foreign attack ~ (i.e., as a positive rather than a negative act).Thus, argues N, the "cause" relations of the acts being held constant between the two domain~ (i.e., enteran~e as a police force but being partisan) are quite different in the two cases.And, in the Vietnam case, the cause of the act obviates any common negativity associated with such "unfair police force treatment."There is no negativity of America to map onto England, and the whole purpose of the analogy has failed.Hence, according to 5Rq can be thought of as another way of loo~.ing at R1 and R2.Alternatively, it could be thought of as replacing RI and R2, since when one country invades another, we do ~ot usually co~slder third party intervention as mere "coming in as a polio= force and taklng the slde of," but rather as an entrance into an ongoing war.However, I think in one light one oen view the relations of 81 and R2 holding in either an internal or external war.6Most criticisms of America's involvement in Vietnam rest on viewing it as an act of intervention in the internal affairs of a country agalnst the will of half oE its people.H, the analogy in thls discourse context is vacuous an~ warrants rejectlon 7. 
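A minimal sketch of the rejection test just described (the data structures and names are illustrative assumptions): an analogy is open to challenge when some relation RI that fails to correspond across the two domains stands in a CAUSE-type relation to one of the constant relations RC on which the analogy's communicative goal hinges.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass
class AnalogousSpace:
    # Generic proposition of which both claims are instances: the predicate is fixed,
    # the arguments are abstracted classes covering the specific objects mentioned.
    abstract_predicate: str
    constant_relations: Set[str]                       # RC: relations held constant and true in both domains
    domain_relations: Dict[str, Dict[str, bool]] = field(default_factory=dict)  # per-domain relation truth
    cause_links: List[Tuple[str, str]] = field(default_factory=list)            # (RI, RC): RI causes/explains RC

def find_mapping_challenge(space: AnalogousSpace, base: str, target: str):
    """Return a relation RI that licenses rejecting the analogy, if any."""
    for ri, rc in space.cause_links:
        if rc not in space.constant_relations:
            continue                                   # RI must bear on a relation the analogy holds constant
        in_base = space.domain_relations.get(base, {}).get(ri, False)
        in_target = space.domain_relations.get(target, {}).get(ri, False)
        if in_base != in_target:                       # RI true in one domain but not the other
            return ri
    return None

# Excerpt 1, roughly: "a woman being bought" is held constant, but who does the
# selling differs, and that difference bears directly on the constant relation.
space = AnalogousSpace(
    abstract_predicate="BUY(woman, for-companionship)",
    constant_relations={"woman_is_bought"},
    domain_relations={"hindu": {"woman_sells_herself": False},
                      "western": {"woman_sells_herself": True}},
    cause_links=[("woman_sells_herself", "woman_is_bought")],
)
print(find_mapping_challenge(space, "hindu", "western"))   # -> "woman_sells_herself"
```

An arbitrary noncorrespondence (say, TRADING WITH CHINA) would not be returned, because it stands in no causal relation to anything the analogy holds constant.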
After N's rejection o~ M'e analogy, and N's offering o~" an alternative analogy , which is somewhat accepted by M ~ as predicted by the gr--~r's analysis of an analogy conversational move u~ed for purposes or evaluatlon/Justlficatlon, it is time to have ~he initiating subject of the analogy returned-to (i.e., i~ is time to return to the subject of Br~Italn's moving into Ireland)The return, on Line 8N'= citing of this alternative analogy is supportive of the grammar's analysis that the purpose of an analogy is vital to Its acceptance, slope, it happens that N views Syria's intervention in Lebanon quite negatively: thus, her cho£ce of this domain where (An her view) is=re is plenty of negativity to ~p.by the way, that in tsr'~a of "at~ribute identity," Amities is a =mob closer latch to England than Syria la. This example supports the theory that "attribute identity" play= a milLimal role in analogy ~appings.10The fact that M attempts to map a "cause" relation between the two domains, further supports the theory that it is correspondence of sohesatizatlon or relations between dosmins, rather than object identity, that is a governing criteria in analogy construction and evaluation.The rules of reference encoded in the context space grammar do not complement traditional pronominallzatlon theories which are based on criteria of recency and resulting potential semantic amblguities.Rather, the rules are more in llne with the theory proposed by Olson who states that "words designate, signal, or specify an intended referent relative to the set of alternatives from which it must be differentiated" [17, p.26~] . The context space grammar is able to delineate this set of alternatives governlng a speaker's choice (and listener's resolution) of a referring expresslon ;I by continually updating its model of the discourse based on its knowledge of the effects associated with different types of conversational moves. create the discourse expectation that upon completion of the analogy, discussion of the initiating context space will be resummed (except in cases of communicative goals q and 5 noted above).Endin~ an analogy conversational move, makes available to the grammar the "Resume-lnitlatlng" discourse expectation, created when the analogy was first generated.The effects of choosing this discourse expactation are to:11Lacking from thls theory, however, but hopefully to be included at a later date, is Webber's notion of evoked entities [27] (i.e., entities not previously mentioned in the discourse but which are derivative from itespecially, quantified sets).o Close the analogous context space (denoting that the space no longer plays a foreground discourse role); o reinstantlate the initiating context space as Active.Excerpt 3 illustrates how the grammar's rule of reference and its updating actions for analo@les explain some seeming surprising surface linguistic forms used after an analo~ in the discourse. The excerpt is taken from an informal conversation between two friends.In the discussion, G is explaining to J the workings of a particle accelerator.Under current discussion is the cavity of the accelerator through which protons are sent and accelerated. Particular attention should be given to G's referring expressions on Line 8 of the excerpt. On Line 9, G refers to the "electrostatic potential" last mentioned on Line 3. 
with the unmodified, close deictlc referring expression 12 "the potential," despite the fact that lntervening~ty on Line 5 he had referenced • gravitational potential," a potential semantic contender for the unmodified noun phrase.In addition, G uses the close delctic "here" to refer to context space CI, though in terms of linear order, context space C2, the analogous context space, is the closer context space.Both these surface linguistic phenomena are explainable and predictable by the context space theory.Line 8 fulfills the discourse expectation of resummlr~ discussion of the initiating context space of the analogy.As noted, the effects of such a move are to close the analogous context space (here, C2) and to reassign the initiating space (here, CI) an active status.As noted, only elements of an active or controlling context space are viable contenders for pronominal and close deictlc references; elements of closed context spaces are not.Hence, despite criteria of recency and resulting potentials of semantic ambiguity, G's references unambiguously refer to elements of CI, the active foregrounded context space in the discourse model.As a second example of speakers ualng close deictlcs to refer to elements of the initiating context space of an analogy, and corresponding use of far deictics for elements of the analogous space, lets re-consider Excerpt 1, repeated below.12Th e grammar considers nThe X" a close deictlc reference as it is often used as a comple~ment to "That X," a clear far deictic expression I. I think if you're going to marry someone in the 2. Hindu tradition, you have to -Well, you -They 3. say you give money to the family, to the glrl, q. but in essence, you actually ~uy her.It's the same in the Western tradition. You 6. know, you see these greasy fat millionaires going 7. around with ~ilm stars, right? They've 8. essentially bought them by their status (?money). 9. No, but, there, the women is selling herself. 10. In these societies, the woman isn't selling 11. herself, her parents are selling her.Lines ; -5: Context Space CI, The Initiating Space. Lines 5 -8: Context Space C3, The Analogous Space. LAnes 9 -11: Context Space C3, The Challenge Space.On Line 9, C rejects B's analogy (as signalled by her use Of the clue words, "t~o, but") by citing a nonoorrespondence of relations between the two domains. Notice that in the rejection, C uses the far daictic • there = to refer to an element of the linearly close analogous context space, C2,t3 and that she uses the clone de~ctlc "these" to refer to an 1~lement~ of the linearly far initiating context space, CI .The grnmm"r models C's move on Line 9 by processing the • Challenge-Analogy-Hap plngs" (CAM) conversational move defined in its discourse network. This move is a subcategory of the grammar' s Challenge move category. 
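The bookkeeping just described, together with the CAM effects spelled out in the next paragraph, can be sketched roughly as follows (an illustration under assumed names, not the ATN grammar itself): ending an analogy either resumes the initiating space and closes the analogous one, or, for a mappings challenge, backgrounds the analogous space, keeps the initiating space controlling, and opens a new active challenge space; referents for pronouns and close deictics are then drawn only from active or controlling spaces.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class Status(Enum):
    ACTIVE = 1
    CONTROLLING = 2
    OPEN = 3
    CLOSED = 4

@dataclass
class Space:
    name: str
    entities: List[str]
    status: Status

def resume_initiating(analogous: Space, initiating: Space) -> None:
    """Effects of taking the Resume-Initiating discourse expectation."""
    analogous.status = Status.CLOSED       # analogous space no longer foregrounded
    initiating.status = Status.ACTIVE      # initiating space is the reference frame again

def challenge_analogy_mappings(analogous: Space, initiating: Space, challenge: Space) -> None:
    """Effects of the Challenge-Analogy-Mappings (CAM) move."""
    analogous.status = Status.OPEN         # backgrounded, but not yet closed
    initiating.status = Status.CONTROLLING # still the frame of reference
    challenge.status = Status.ACTIVE

def reference_candidates(spaces: List[Space]) -> List[str]:
    """Only elements of active or controlling spaces can be picked out with
    pronouns or close deictics ('the X', 'these', 'here')."""
    return [e for s in spaces
            if s.status in (Status.ACTIVE, Status.CONTROLLING)
            for e in s.entities]

# Excerpt 3: after resuming C1, "the potential" can only pick out the electrostatic
# potential of C1, not the gravitational potential of the now-closed analogous C2.
c1 = Space("C1", ["electrostatic potential"], Status.CONTROLLING)
c2 = Space("C2", ["gravitational potential"], Status.ACTIVE)
resume_initiating(analogous=c2, initiating=c1)
print(reference_candidates([c1, c2]))      # ['electrostatic potential']
```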
Since this type of analogy challenge entails contrasting constituents of both the initiatlng and analogs context spaces'% the grammar must decide which of the two spaces should be in a controlling status, i.e., which space should serve as the frame of reference for subsequent processing.Reflecting the higher influential status of the initiating context space, the grammar chooses it as its reference frame Is.As such, on its transition path for the CAM move, move, the gr-mnutr" 13This conversation was recorded in Switzerland, and in terms of a locative use of delctics, Western society is the closer rather than Hindu society.Thus, the choice of deict£c cannot be explained by appeal to external reference criteria.1~Notlce, however, that C does not use the close " delctlc "here," though it is a better contrastlve term with "there" than is =these."The rule of using close delctlcs seems to be slightly constrained in that if the referent of "here = is a location, and the s~aker is not in the location being referenced, then, s/he cannot use • here."15Zn a different type of analogy challenge, for example, one could simply deny the truth of the smalo~us utterances.16Zn the canes of Pre-Generalizatlon and Topic-Contrast-Shlft analogies, it is only after the analogy has been accepted that the analogous space is allowed to usurp the foreground role of the initiating context space.O puts the currently active context space (i.e., the analogous context space) in a state (reflecting its new background role); c leaves the initiating space in its Controlling state ( I. e., it has been serving as the reference frame for the analogy); o creates a new Active context space in which to put the challenge about to be put forward.Performing such u~latlng actions, and using £ts rule that only elements in a controlling or active space are viable contenders for close delotlc and pronominal references, enables the grammar to correctly model, explain, and predict C's reference forms on Lines 9 11 of the excerpt. | null | null | null | In this paper I have offered a treatment of analogies within spontaneous dlalo6ues.In order to do thls I first proposed a context space model of discourse.In the model discourse utterances are partitioned into discrete discourse units based on the communicative goal that they serve in the discussion.All communicative acts effect the precedlng discourse context and I have shown that by tracking these effects the grammar can specify a frame of reference for subsequent discussion. Then, a structure-~applng approach tO analogies was discussed. In this approach it is claimed that the focus of an analogy is on system~ of relatlonships between objects, rather than on attributes of objects. Analysis of naturally occurring analogies supported this claim. 
I then showed that the context space theory's communicative goal analysis of discourse enabled the theory to go beyond the structure-mapping approach by providing a further specification of which kinds of relationships are most likely to be included in the description of an analogy. Lastly, I presented a number of excerpts taken from naturally ongoing discourse and showed how the context space analysis provided a cogent explanation for the types of analogies found in discourse, the types of rejections given to them, the rule-like thematic development of a discourse after an analogy, and the surface linguistic forms used in these developments. In conclusion, analyzing speakers' spontaneous generation of analogies and other conversants' reactions to them provides us an unusually direct form by which to access individuals' implicit criteria for analogies. These exchanges reveal what conversants believe analogies are responsible for and thereby what information they need to convey.
introduction:
People's use of analogies in conversation reveals a rich set of processing strategies.Consider the following example.A: B:I. I think if you're going to marry someone in the 2. Hindu tradition, you have to -Well, you -They 3. say you give money to the family, to the glrl, 4. but in essence, you actually buy her.5. It's the same in the Western tradition. You 6. know, you see these greasy fat millionaires going 7. around with film stars, right? They've 8. essentially bought them by their status (?money). 9 . HO, but, there, the woman is selllng herself. 10. In these societies, the woman isn't selling 11. herself, her parents are selling her.There are several interesting things happening in this exchange. For example, notice that the analogy is argued and discussed by the conversants, and that in the arEumentatlon C uses the close discourse deictlo "these" tO refer to the in~tlatlng subject of the a~alogy, and that she uses the far discourse delctlo "there" to refer to the linearly closer analogous utterances. In addition, notice that C bases her rejection ca a noncorrespondence of relations effectlng the relation claimed constant between the two domains (women hei~ sold).She does not simply pick any arbitrary noncorrespondence between the two domains.In the body of this paper, I address and develop these types of phenomena accompanying analogies in naturally ongoing discourse.The body of the paper is divided into four sections. First a theoretic framework for discourse is presented. This is followed by some theoretic work on analo~es, an integration of this work with the general theory of discourse proposed here, and an illuntratlon of how the II would llke to thank Dedre Gentner for many useful comments end discussions.integration of the different approaches explicates the issues under discussion.In the last section of the paper, I concentrate on some surface llngulstlo phenomena accompanying a oonversant's use of analogy in spontaneous discourse.
the context space theory of discourse:
A close analysis of spontaneous dialogues reveals that discourse processing is focused and enabled by a conversant's ability to locate ~ single frame of reference [19, 15, 16] for the discussion.In effective communication, listeners are able to identify such a frame of reference by partitioning discourse utterances into a hierarchical organization of distinct but related and linked context snaces. At any given point, only some of these context spaces are in the foreground of discourse.Foreg~ounded context spaces provide the ~eeded reference frame for subsequent discussion.abstract process model of discourse generation/interpretation incorporatlng a hierarchical view of discourse has been designed using the formalism of an Augmented Transition Network (ATN) [29] 2 .The ~Ta~r encoding the context space theory [20, 22] views a conversation as a sequence of conversatlooal moves. Conversational moves correspond to a speaker's communioatlve goal vis-A-vis a particular preceding section of discourse.Among the types of conversational moves -speaker communicative goals -formalized in the grammar are:Challenge, Support, Future-Generallzation, and Further-Development.The correlation between a speakerPs utterances and a speaker's communicative goal in the context space grammar is somewhat s~m~lar to a theory of speech acts A la Austin, Searle, and Grloe [I, 2q, 9] .As in the speech act theory, a speaker's conversatloral move is recognized as a functional communicative act [q] with an associated set of preconditions, effects, and mode of fulfillment. However, in the context space approach, the acts recognlzed are specific to maxlm-abldlng thematic conversational development, and their preconditions and effects stem from the discourse structure (rather than from/on arbitrary states in the external world).All utterances that serve the fulfillment of a slng~le communicative goal are partitloned into a single discourse unit -called a context space. A context space characterizes the role that its various parts play In the overall discourse structure and it explicates features relevant to "well-formedness" and "maxim-abiding" discourse development. ~ine types of context spaces have been formalized in the grammar representing the different constituent types of a discourse.The spaces are characterized in much the same way as elements of a • Systemic Grammar" A la Halllday [10] via attributes represented as "slots" per Minsky [I~] .All context spaces have slots for the followlng elements:2The rules incorporated in the grammar by themselves do not form a complete system of discourse generation/inter pretatlon.Rather, they enable specification of a set of high level Semantlc/log~Ical constraints that a surface lln~istlc from has to meet in order to fill a certain maxlm-abidlng conversational role at a given point in the discourse.o a propositional representation of the set of functionally related utterances said to iie An the space;o the communicative goal served by the space; o a marker reflecting the influential status of the space at any given point in the discourse;o links to preceding context spaces in relatlon to which this context space wan developed;o specification at the relations involved.An equally important feature of a context space are its slots that hold the inferred components needed to recognize the communicative goal that the space serves in the discourse context. 
There are various ways to fulfill a given communicative goal, and usually, dependent on the mode of fulfillment and the goal in question, one can characterize a set of standardized implicit components that need to be inferred.For example, as noted by investigators of argumentation (e.g., [~, 23, 5, 22] ), in interpreting a proposition as supporting another, we often need to infer some sot of mappings between an Interred generic principle of support, the stated proposition of support, and the claim being supported. We must also infer some general rule of inference that allows for conclusion a claim given the explicit statements of support and these inferred components.this standardization of inferential elaborations, I have oategorlzed dlfferent types of context spaces based on communicative goal and method fttlftllment charaeterlzatlons (i.e., specification of specific slots needed to hold the standardized inferential elaboratlons particular tO a g~Lven goal and mode of fulfillment).Dellneatioo of context spaces, then, is functlomally based, and in the context space grammar, ImplAclt components of a move are treated an much a part of the discourse as those components verbally expressed.
the analogy conversational move:
Znterpretlng/understanding an analogy obviously involves some inferenoing ca the part of a listener. An analogous context space, therefore, has some slots particular to it.The grammar's characterization of an analogous context space is derivative from its for~uLl analysis of an analogy oonversatlom-l move. Gentner's analysis can be used to explain B's analogy between the Hindu and Western traditions in Excerpt I. The relation ~ BUYING WOMAN. FOR $0~ COMPANION FUNCTION is held constant between the two doma/ns, and the appropriateness Of the analogy iS not affected, for instance, hy the noncorrespondlng political views and/or religions of the two societies.While Gentner cuts down on the number of correspondences that must exist between two domains for an analogy to be considered good, she still leaves open a rather wide set o£ relations that must seemingly be matched between a base and target domain.We need some. way to further characterize Just those relations that must be mapped. For example, the relation TRADING WITH CHINA is totally irrelevant to the Hindu-Western analogy in this discourse context.As noted by Lakoff & Johnson [12] , metaphors simultaneously "highlight" and "hide" aspects of the two domains being mapped onto each other.The context space theory supplements both Lakoff & Johnson's analysis and the structure-mapplng approach in its ability to provide relevant relation characterization.. The context space grammar's analysis of analogies can be characterized by the following:Explicating the connection between an utterance purportlng to make a claim analogous to another rests on recoghizlng that fc~. two propositions to be analogous, it anst be the cnse that they can bo ~h be seen an ~nstanc,s Of some more general claim, such that the predicates of all three propositions are identloal (i.e., relation identity), and the correspondent objects of the two domains involved are both subsets of some larger sot specified in this more general claim.is based on specifylng some relation, RI, of one domain, that one implies (or claims) is not true in the other; or is based on specifying some non-ldentloal attrlbute-value pair ~'om whloh such a relation, RI, can be inferred.In both cases, RI oust itself stand in a 'CAUSE' relation (or soma other such relatlon 3) with one Of the relations explicitly mentioned in the creation of the analogy (i.e., one being held constant between the two domains, that we csul call RC). Furthermore, it must be the cnse that the communicative goal of the analogy hinges on RI(RC) being true (or not true) in both of the domains.Re£1ectlng this analysis of ~--!o~Les, all analogous context spaces have the followlng slot deflnltlons (among others).This slot contains the generic proposltlon, P, of which the Inltlatlng and analogous claim are instances. 
Reflecting the fact that the same predication must be true of both cla.lms, 3Since aceordin~ to this analysis the prime focal point of the analogy is always the relations (i.e., "actions") being held constant, and a major aspect of an "action" is its cause (reason, intent, or effect of occurrence), a non~orrespondenoe in one of these relations will usually invalidate the point at the analogy.Proposition:Mappings:the predicate in the abstract slot is fixed; other elements of the abstract are variables corresponding to the abstracted clansea of which the specific elements mentionod in the analogous and initiating clalms are members.The structure of this slot, reflecting this importance of relation identity, consists of two subslots:This slot contains a llst of the relations that are constant and true in the two domains. Analogy construction entails a local shift in toplo, and, therefore, in general, a/tar discussion of the analogous space (iscluding its component parts, such as "supports-of," "challenges-of,, etc.), we have immediate resumption of the initiating context space. (When analogies are used for goals ~ & 5 noted above, if the analogy is accepted, then there need not be a return to the initiating space.)In this section, I present an analysis of an excerpt in which convereants spontaneously generate and argue about analog~les.The analysis hiEhlights the efficacy of inteKratlng the structure napping approach with r~e communicative gnal directed approach of the context space theory.The excerpt also illustrates the rule-llke behavior governing continued thematic development of a discourse after an analogy is given. N denies such a presummed negativity by arguing that it is possible to view America's involvement in Vietnam a~ coming to the aid of a country under foreign attack ~ (i.e., as a positive rather than a negative act).Thus, argues N, the "cause" relations of the acts being held constant between the two domain~ (i.e., enteran~e as a police force but being partisan) are quite different in the two cases.And, in the Vietnam case, the cause of the act obviates any common negativity associated with such "unfair police force treatment."There is no negativity of America to map onto England, and the whole purpose of the analogy has failed.Hence, according to 5Rq can be thought of as another way of loo~.ing at R1 and R2.Alternatively, it could be thought of as replacing RI and R2, since when one country invades another, we do ~ot usually co~slder third party intervention as mere "coming in as a polio= force and taklng the slde of," but rather as an entrance into an ongoing war.However, I think in one light one oen view the relations of 81 and R2 holding in either an internal or external war.6Most criticisms of America's involvement in Vietnam rest on viewing it as an act of intervention in the internal affairs of a country agalnst the will of half oE its people.H, the analogy in thls discourse context is vacuous an~ warrants rejectlon 7. 
After N's rejection o~ M'e analogy, and N's offering o~" an alternative analogy , which is somewhat accepted by M ~ as predicted by the gr--~r's analysis of an analogy conversational move u~ed for purposes or evaluatlon/Justlficatlon, it is time to have ~he initiating subject of the analogy returned-to (i.e., i~ is time to return to the subject of Br~Italn's moving into Ireland)The return, on Line 8N'= citing of this alternative analogy is supportive of the grammar's analysis that the purpose of an analogy is vital to Its acceptance, slope, it happens that N views Syria's intervention in Lebanon quite negatively: thus, her cho£ce of this domain where (An her view) is=re is plenty of negativity to ~p.by the way, that in tsr'~a of "at~ribute identity," Amities is a =mob closer latch to England than Syria la. This example supports the theory that "attribute identity" play= a milLimal role in analogy ~appings.10The fact that M attempts to map a "cause" relation between the two domains, further supports the theory that it is correspondence of sohesatizatlon or relations between dosmins, rather than object identity, that is a governing criteria in analogy construction and evaluation.The rules of reference encoded in the context space grammar do not complement traditional pronominallzatlon theories which are based on criteria of recency and resulting potential semantic amblguities.Rather, the rules are more in llne with the theory proposed by Olson who states that "words designate, signal, or specify an intended referent relative to the set of alternatives from which it must be differentiated" [17, p.26~] . The context space grammar is able to delineate this set of alternatives governlng a speaker's choice (and listener's resolution) of a referring expresslon ;I by continually updating its model of the discourse based on its knowledge of the effects associated with different types of conversational moves. create the discourse expectation that upon completion of the analogy, discussion of the initiating context space will be resummed (except in cases of communicative goals q and 5 noted above).Endin~ an analogy conversational move, makes available to the grammar the "Resume-lnitlatlng" discourse expectation, created when the analogy was first generated.The effects of choosing this discourse expactation are to:11Lacking from thls theory, however, but hopefully to be included at a later date, is Webber's notion of evoked entities [27] (i.e., entities not previously mentioned in the discourse but which are derivative from itespecially, quantified sets).o Close the analogous context space (denoting that the space no longer plays a foreground discourse role); o reinstantlate the initiating context space as Active.Excerpt 3 illustrates how the grammar's rule of reference and its updating actions for analo@les explain some seeming surprising surface linguistic forms used after an analo~ in the discourse. The excerpt is taken from an informal conversation between two friends.In the discussion, G is explaining to J the workings of a particle accelerator.Under current discussion is the cavity of the accelerator through which protons are sent and accelerated. Particular attention should be given to G's referring expressions on Line 8 of the excerpt. On Line 9, G refers to the "electrostatic potential" last mentioned on Line 3. 
with the unmodified, close deictlc referring expression 12 "the potential," despite the fact that lntervening~ty on Line 5 he had referenced • gravitational potential," a potential semantic contender for the unmodified noun phrase.In addition, G uses the close delctic "here" to refer to context space CI, though in terms of linear order, context space C2, the analogous context space, is the closer context space.Both these surface linguistic phenomena are explainable and predictable by the context space theory.Line 8 fulfills the discourse expectation of resummlr~ discussion of the initiating context space of the analogy.As noted, the effects of such a move are to close the analogous context space (here, C2) and to reassign the initiating space (here, CI) an active status.As noted, only elements of an active or controlling context space are viable contenders for pronominal and close deictlc references; elements of closed context spaces are not.Hence, despite criteria of recency and resulting potentials of semantic ambiguity, G's references unambiguously refer to elements of CI, the active foregrounded context space in the discourse model.As a second example of speakers ualng close deictlcs to refer to elements of the initiating context space of an analogy, and corresponding use of far deictics for elements of the analogous space, lets re-consider Excerpt 1, repeated below.12Th e grammar considers nThe X" a close deictlc reference as it is often used as a comple~ment to "That X," a clear far deictic expression I. I think if you're going to marry someone in the 2. Hindu tradition, you have to -Well, you -They 3. say you give money to the family, to the glrl, q. but in essence, you actually ~uy her.It's the same in the Western tradition. You 6. know, you see these greasy fat millionaires going 7. around with ~ilm stars, right? They've 8. essentially bought them by their status (?money). 9. No, but, there, the women is selling herself. 10. In these societies, the woman isn't selling 11. herself, her parents are selling her.Lines ; -5: Context Space CI, The Initiating Space. Lines 5 -8: Context Space C3, The Analogous Space. LAnes 9 -11: Context Space C3, The Challenge Space.On Line 9, C rejects B's analogy (as signalled by her use Of the clue words, "t~o, but") by citing a nonoorrespondence of relations between the two domains. Notice that in the rejection, C uses the far daictic • there = to refer to an element of the linearly close analogous context space, C2,t3 and that she uses the clone de~ctlc "these" to refer to an 1~lement~ of the linearly far initiating context space, CI .The grnmm"r models C's move on Line 9 by processing the • Challenge-Analogy-Hap plngs" (CAM) conversational move defined in its discourse network. This move is a subcategory of the grammar' s Challenge move category. 
Since this type of analogy challenge entails contrasting constituents of both the initiatlng and analogs context spaces'% the grammar must decide which of the two spaces should be in a controlling status, i.e., which space should serve as the frame of reference for subsequent processing.Reflecting the higher influential status of the initiating context space, the grammar chooses it as its reference frame Is.As such, on its transition path for the CAM move, move, the gr-mnutr" 13This conversation was recorded in Switzerland, and in terms of a locative use of delctics, Western society is the closer rather than Hindu society.Thus, the choice of deict£c cannot be explained by appeal to external reference criteria.1~Notlce, however, that C does not use the close " delctlc "here," though it is a better contrastlve term with "there" than is =these."The rule of using close delctlcs seems to be slightly constrained in that if the referent of "here = is a location, and the s~aker is not in the location being referenced, then, s/he cannot use • here."15Zn a different type of analogy challenge, for example, one could simply deny the truth of the smalo~us utterances.16Zn the canes of Pre-Generalizatlon and Topic-Contrast-Shlft analogies, it is only after the analogy has been accepted that the analogous space is allowed to usurp the foreground role of the initiating context space.O puts the currently active context space (i.e., the analogous context space) in a state (reflecting its new background role); c leaves the initiating space in its Controlling state ( I. e., it has been serving as the reference frame for the analogy); o creates a new Active context space in which to put the challenge about to be put forward.Performing such u~latlng actions, and using £ts rule that only elements in a controlling or active space are viable contenders for close delotlc and pronominal references, enables the grammar to correctly model, explain, and predict C's reference forms on Lines 9 11 of the excerpt.
conclusion:
In this paper I have offered a treatment of analogies within spontaneous dialogues. In order to do this I first proposed a context space model of discourse. In the model, discourse utterances are partitioned into discrete discourse units based on the communicative goal that they serve in the discussion. All communicative acts affect the preceding discourse context, and I have shown that by tracking these effects the grammar can specify a frame of reference for subsequent discussion. Then, a structure-mapping approach to analogies was discussed. In this approach it is claimed that the focus of an analogy is on systems of relationships between objects, rather than on attributes of objects. Analysis of naturally occurring analogies supported this claim. I then showed that the context space theory's communicative goal analysis of discourse enabled the theory to go beyond the structure-mapping approach by providing a further specification of which kinds of relationships are most likely to be included in the description of an analogy. Lastly, I presented a number of excerpts taken from naturally ongoing discourse and showed how the context space analysis provided a cogent explanation for the types of analogies found in discourse, the types of rejections given to them, the rule-like thematic development of a discourse after an analogy, and the surface linguistic forms used in these developments. In conclusion, analyzing speakers' spontaneous generation of analogies and other conversants' reactions to them provides us an unusually direct form by which to access individuals' implicit criteria for analogies. These exchanges reveal what conversants believe analogies are responsible for and thereby what information they need to convey.
Appendix:
| null | null | null | null | {
"paperhash": [
"lawler|metaphors_we_live_by",
"gentner|are_scientific_analogies_metaphors",
"winston|learning_and_reasoning_by_analogy",
"gentner|the_structure_of_analogical_models_in_science.",
"carbonell|metaphor_-_a_key_to_extensible_semantic_analysis",
"cohen|elements_of_a_plan-based_theory_of_speech_acts",
"ortony|beyond_literal_similarity",
"toulmin|the_uses_of_argument",
"halliday|options_and_functions_in_the_english_clause",
"austin|how_to_do_things_with_words"
],
"title": [
"Metaphors We Live by",
"Are Scientific Analogies Metaphors",
"Learning and reasoning by analogy",
"The Structure of Analogical Models in Science.",
"Metaphor - A Key to Extensible Semantic Analysis",
"Elements of a Plan-Based Theory of Speech Acts",
"Beyond Literal Similarity",
"The uses of argument",
"Options and functions in the English clause",
"How to do things with words"
],
"abstract": [
"Every linguist dreams of the day when the intricate variety of human language will be a commonplace, widely understood in our own and other cultures; when we can unlock the secrets of human thought and communication; when people will stop asking us how many languages we speak. This day has not yet arrived; but the present book brings it somewhat closer. It is, to begin with, a very attractive book. The publishers deserve a vote of thanks for the care that is apparent in the physical layout, typography, binding, and especially the price. Such dedication to scholarly publication at prices which scholars can afford is meritorious indeed. We may hope that the commercial success of the book will stimulate them and others to similar efforts. It is also a very enjoyable and intellectually stimulating book which raises, and occasionally answers, a number of important linguistic questions. It is written in a direct and accessible style; while it introduces and uses a number of new terms, for the most part it is free of jargon. This is no doubt part of its appeal to nonlinguists, though linguists should also find it useful and provocative. It even has possibilities as a textbook. Lakoff and Johnson state their aims and claims forthrightly at the outset (p. 3):",
"Abstract : The goal of this paper is to provide a structural characterization of analogy in science, contrasting good science analogies with literary metaphors and with poorer examples of science analogies. The paper first presents a theoretical approach in which complex metaphors and analogies are treated as structure-mappings between domains. Within this framework, metaphor and analogy are contrasted with literal similarity. Then, a set of distinguishing structural characteristics is proposed and applied in a series of comparisons. To illustrate the points, analogies of historical importance are analyzed. (Author)",
"We use analogy when we say something is a Cinderella story and when we learn about resistors by thinking about water pipes. We also use analogy when we learn subjects like economics, medicine, and law. This paper presents a theory of analogy and describes an implemented system that embodies the theory. The specific competence to be understood is that of using analogies to do certain kinds of learning and reasoning. Learning takes place when analogy is used to generate a constraint description in one domain, given a constraint description in another, as when we learn Ohm's law by way of knowledge about water pipes. Reasoning takes place when analogy is used to answer questions about one situation, given another situation that is supposed to be a precedent, as when we answer questions about Hamlet by way of knowledge about Macbeth.",
"Abstract : Analogical models can be powerful aids to reasoning, as when light is explained in terms of water waves; or they can be misleading, as when chemical processes are thought of in terms of life processes such as putrefaction. This paper proposes a structural characterization of good science analogy using a theoretical approach in which complex metaphors and analogies are treated as structure-mappings between domains. To delineate good from poor science analogy, a series of comparisons is made. First, metaphor and analogy are contrasted with literal similarity; then, explanatory-predictive analogy is contrasted with expressive metaphor; finally, within science, good explanatory analogy is contrasted with poor explanatory analogy. Analogies of historical importance are analyzed and empirical findings are discussed. (Author)",
"Interpreting metaphors is an integral and inescapable process in human understanding of natural language. This paper discusses a method of analyzing metaphors based on the existence of a small number of generalized metaphor mappings. Each generalized metaphor contains a recognition network, a basic mapping, additional transfer mappings, and an implicit intention component. It is argued that the method reduces metaphor interpretation from a reconstruction to a recognition task. Implications towards automating certain aspects of language learning are also discussed.",
"This paper explores the truism that people think about what they say. It proposes hat, to satisfy their own goals, people often plan their speech acts to affect their listeners' beliefs, goals, and emotional states. Such language use can be modelled by viewing speech acts as operators in a planning system, thus allowing both physical and speech acts to be integrated into plans. \n \nMethodological issues of how speech acts should be defined in a plan-based theory are illustrated by defining operators for requesting and informing. Plans containing those operators are presented and comparisons are drawn with Searle's formulation. The operators are shown to be inadequate since they cannot be composed to form questions (requests to inform) and multiparty requests (requests to request). By refining the operator definitions and by identifying some of the side effects of requesting, compositional adequacy is achieved. The solution leads to a metatheoretical principle for modelling speech acts as planning operators.",
"Hitherto, theories of similarity have restricted themselves to judgments of what might be called literal similarity. A central thesis of this article is that a complete account of similarity needs also to be sensitive to nonliteralness, or metaphoricity, an aspect of similarity statements that is most evident in similes but that actually underlies metaphorical language in general. Theoretical arguments are advanced in support of the claim that metaphoricity can be represented in terms of the relative degrees of salience of matching (or matchable) attributes of the two terms in a comparison. A modification of Tversky's account of similarity is proposed. The implications of this proposal for similarity statements are discussed, along with implications for the psychological processes involved in their comprehension. It is argued that the general account of similarity proposed, including, as it does, nonliteral similarity, can form not only the basis of a theory of metaphor but can also give a credible account of the relationship between metaphor, analogy, and similarity.",
"Preface Introduction 1. Fields of argument and modals 2. Probability 3. The layout of arguments 4. Working logic and idealised logic 5. The origins of epistemological theory Conclusion References Index.",
"In his excellent summary of the work of the Prague school, Josef Vachek draws attention to the development by Czechoslovak linguists of the 'functionalist' view of linguistic structure. One of the characteristics of this approach has been the recognition of several components in the organization of the grammar of a language, a conception which Vachek shows to be derivable from the work of Mathesius and of Biihler. The importance of this conception appears very clearly from a study of the systems and structures of the English clause. The systems having the clause as their point of origin group themselves into three sets which I have referred to elsewhere under the headings of transitivity, mood and theme. These labels refer specifically to sets of clause systems, which are however relatable to these general components of the grammar. Those of transitivity belong to that area which Vachek derives from Biihler's 'Darstellungsfunktion' and glosses as 'informing of the factual, objective content of extralinguistic reality'; Danes calls it the 'semantic structure' of the sentence. Those of mood express speech function, the relations among the participants in a speech situation and the speech roles assigned by the speaker to himself and his interlocutors; this includes most of Poldauf's 'third syntactical plan', and embraces both of Biihler's additional functions—the speaker's attitude and his attempt to influence the hearer—though excluding (as outside grammar) the paralinguistic indexical signals. Theme is the clausal part of Mathesius' 'functional sentence analysis', Danes' 'organization of utterance', which continues to be studied extensively by Firbas and others; this concerns the structuring of the act of communication within the total framework of a discourse, the delimitation of message units and the distribution of information within them. Thus the English clause embodies options of three kinds, experiential, interpersonal and intratextual, specifying relations among (respectively) elements of the speaker's experience, participants defined by roles in the speech situation, and parts of the discourse. Although the clause options do not exhaust the expression of these semantic relations—other syntactic resources are available, quite apart from the selection of lexical items—the clause provides the domain for many of the principal options associated with these three components. At the same time it is useful to recognize a fourth component, the logical, concerned with the and's and or's and if's of language; this is often subsumed under the first of those above (e.g. by Danes; cf. n. 4 above) with some general label such as 'cognitive', but it is represented by a specific set of structural resources (hence not figuring among the clause options) and should perhaps rather be considered separately. Let us then suggest four such generalized components in the organization of the grammar of a language, and refer",
"* Lecture I * Lecture II * Lecture III * Lecture IV * Lecture V * Lecture VI * Lecture VII * Lecture VIII * Lecture IX * Lecture X * Lecture XI * Lecture XII"
],
"authors": [
{
"name": [
"J. Lawler",
"G. Lakoff",
"Mark Johnson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Gentner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Winston"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Gentner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Carbonell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Philip R. Cohen",
"C. Raymond Perrault"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Ortony"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Toulmin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Halliday"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Austin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"1898149",
"60483078",
"9814700",
"61081758",
"12282464",
"2166355",
"19216434",
"120694372",
"141097032",
"170896069"
],
"intents": [
[],
[],
[],
[],
[],
[],
[],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | - Problem: The paper aims to analyze analogies in natural conversations to gain insight into people's implicit evaluation procedures for analogies. It focuses on the formalization of the effects of analogy on discourse development and the hierarchical structuring of discourse into distinct but related context spaces.
- Solution: The hypothesis of the paper is that by examining analogies in naturally ongoing discourse using the context space theory, it is possible to elucidate the rule-like behavior governing thematic development after an analogy is presented and to explain the surface linguistic forms used in discourse. | 524 | 0.009542 | null | null | null | null | null | null | null | null |
b92a93910f9710eded58996e5937dee11fb16a22 | 18645311 | null | Language Production: the Source of the Dictionary | Ultimately in any natural language production system the largest amount of human effort will go into the construction of the dictionary: the data base that associates objects and relations in the program's domain with the words and phrases that could be used to describe them. This paper describes a | {
"name": [
"McDonald, David D."
],
"affiliation": [
null
]
} | null | null | 19th Annual Meeting of the Association for Computational Linguistics | 1981-06-01 | 9 | 13 | null | null | null | null | possible the automatic description of individual objects according to their position in the semantic net. Furthermore, because the process of deciding what properties to use in an object's description is now given over to a common procedure, we can write general-purpose rules to, for example, avoid redundancy or grammatically awkward constructionS.Regardless of its design, every system for natural !anguage production begins by selecting objects and relations from the speaker's internal model of the world, and proceeds by choosing an English phrase to describe each selected item, combining them according to the properties of the phrases and the constraints of the language's grammar and rhetoric. TO do this, the system must have a data base of some sort, in which the objects it will talk about are somewhow associated with the appropriate word or phrase (or with procedures that will construct them). 1 will refer to such a data base as a dictionary.Evcry production system has a dictionary in one form or another, and its compilation is probably the single most tedious job that the human designer must perform. In the past. typically every object and relation has been given its own individual "lex" property with the literal phrase to be used; no attempt was made to share criteria or sub-phrases between properties; and there was a tacit a~umtion that the phrase would have the right form and content in any of the contexts that the object will be mentioned. (For a review of this literature, see r~a .) However, dictionaries built in this way become increasingly harder to maintain as programs become larger and their discourse more sophisticated. We would like instead some way to de the extention of the dictionary direcdy to the extention of the program's knowledge base; then, as the knowledge base expands the dictionary will expand with it with only a minimum of additional cffort. This paper describes a technique for adapting a semantic abstraction hierarchy of thc sort providcd by ~d~-ONE ~:1.] to function directly as a dictionary for my production system MUMIII.I~ [,q'~.. Its goal is largely expositional in the sense that while the technique is fully spocificd and proto-types have been run, many implementation questions remain to be explored and it is thus premature to prescnt it as a polished system for others to use; instead, this paper is intended as a presentation of the issues--potcntial economicw---that the technique is addressing. In particular, given the intimate relationship between the choice of architecture in the network formalism used and the ability uf the dictionary to incorporate linguistically useful generalizations and utilities, this presentation may suggest additional criteria for networ k design, namely to make it easier to talk about the objects the network The basic idea of "piggybacking" the dictionary onto the speaker's regular semantic net can be illustrated very simply: Consider the KL.ONE network in figure one, a fragment taken from a conceptual taxonomy for augmented transition nets (given in [klune]). The dictionary will provide the means to describe individual concepts (filled ellipses) on the basis of their links to generic concepts lempty ellipses) and their functional roles (squar~s), as shown there for the individual concept "C205". The default English description of C205 (i.e. 
"the jump arc fi'om S/NP to S/DCL") is created recursiveiy from dL.~riptions of the three network relations that C205 participates in: its "supercuneept" link to the concept "jump-are". and its two role-value relations: "source-stateIC205)=S/NP" and "nextstate(C205)=S/t:~Ct.". Intuitively. we want to associate each of the network objects with an English phrase: the concept "art'" with the word "art"', the "source-state" role relation with the phrase "C205 comes from S/NF" (note the embedded references), and so on. The machinery that actually brings about this ~sociation is, of course, much more elaborate, involving three different recta-level networks describing the whole of the original, "domain" network, as well as an explicit representation of the English grammar (i.e. it Ls itsclf expressed in rd,-oN~). among them at run dine). When we want to describe an object, we follow out its recta-link inzo the dictionary network and then realize the word or phrase that we find. annotates the supen:oncept chain front "jump-an:" to "object"; comparable dictionary networks can be built [.or hierarchies of roles or other hierarchical network structures. Noticc how the use of an inheritance m~hanisrn within the dictionary network (denoted by the vcrticat [inks betwccn roles) allows us on the one hand to state the determiner decision (show, bern only as a cloud) once and for all at thc level of the domain conccpt "object", while at the same time we can vo:umulate or supplant lexk:al material as we move down to more specific levels in the domain nctwork.After all the inhent*n~c is factored in. dt¢ entry for. e.g., the generic concept "lump-ate" will de~:. There is much more to be said about how the "embedded entries" can be controlled, how, for example, the planner can arrange to say either "C205 goes to S/DCL" or "There is a jump arc going to S/DCL" by dynamically specializing the description of the clause, however it would be taking us too far afield: the interested reader is referred to [thesisl. The point to be made here is just that the writer of the dictionary has an option either to write specific dictionary entries for domain relations, or to leave them to general "macro entries" that will build them out of the entries for the objects involved as just sketched. Using the macro entries of course meau that less effort v, ill be needed over all, but using specific entries permits one to rake advantage of special idioms or variable phrases that are either not productive enough or not easy enough to pick out in a standard recta-level description of the domain network to be worth writing macro entries for. A simple example would be a special entry for when one plans to describe an arc in terms of both its source and its nexi states: in this case there is a nice compaction available by using die verb "connect" in a single clause (instead of one clause for each role). Since the ~I,-O~F. formalism has no transparent means of optionally bundling two roles into one, this compound rcladon has to be given its own dictionary entry by hand.Up to this point, we have been looking at associations between "organic" objects or relations in the domain network and their dictionary entries for production. 
It is often the case however, that the speech planner Exauszivc details of these operations may be found in ["1~ .The mechanisms of the dictionary per se perform two ~ncdons: (l) the association of the "ground level" linguistic phrases with the objeets of the domain network, and 2 purposes that the only phrase listed in dictionary for the next-state relation is the one from the first example, Le.Now. "say-about"s goal is a sentence that has S/DCL as its subje=.It can tell from the dictionary's annotauon and its English grammar that the phrase as it stands will not permit this since the verb "go to" does not passiviz¢; however, the phrase is amenable to a kind of deffiog transformation that would yield the text: "S/DCL L~ where C205 goe~ to'."Say-about" arraogcs for this consu'uccion by building the structure below as its representation ofi~ decision, passing it on to .~R:),mu.: for realizatiou.Note ~at this structure :'-.,.,.,.,.,.,.,.,.,~sentially a linguistic constituent structure of the .sual sort, describing the (annotated) surtace sU-ucture of dze intended text co the depth that "say-abouC' has planned it, The ~nctional labels marking the constituent positions (i.e. "subject", "verb", ccc.) control the options for the realization of the domain-network objects they initially con=in. (The objects will be subscquendy replaced by the phrases that reafizc thcm. processing from leR to righc) Thus the first instance of S/I)CI_ in the subject position, is realized without contextual effects as the name ".V/DCL": while the second instance, acting as the reladve pronoun fur the cleft, is realized as the interrogative pronoun "where": and the final instance, embedded within the "next-state" relation, is suprcsscd entirely even though the rest of the relation is expre.~cd normally. These cnutextoal variations are all entirely transparent to the dictionary mechanisms and demonstrate how we can increa~ the utility of the phrases by carefully annotating them in the dictionary and using general purpose operations chat are ~ggered by the descriptions of the phrases alone, therefore not needing to know anything about their semant~ content.This example was of contextual effects that applied aRer the domain objects had been embedded in a linguistic structure, l.inguis~c context can have its effect eadier as well by monitoring the aecumuladon p~occ~ and appiyiog its effects at that level. Considering how the phrase for the jump are C2.05 would be fonned in this same example. Since the planner's original insmaction (i.e. "(say-abm,t_ )" did not mention C205 spccifcally, the description of that ubjec~ will be IeR to the default precis discussed earlier. In the original example, C205 was dc~ribed in issoladon, her= it L~ part of an ongoing dJscou~e context which muse be allowed ru influence the proton.The default description employed all three of the domain-network relations that C205 is involved in. In this discourse context, however, one of those relations, "neat-smte(c2OS)=SIDCL". has already be given in the text: were we to include it in this realization of C'205. the result would be garishly redundant and quite unnatural, i.e. "3/DCL ~ where the jump arc from S/NP Io S/DCL goes to". To rule out this realization, we can filterttm original set of three relations, eliminating the redundant relation bemuse we know that it is already mentioned in the CCXL Doing this en~ils (1) having some way to recognize when a relauon is already given in the text. 
and (2) a predictable point in the preec~ when the filtering can be done. rha second is smaight fo~arcL the "describe-as" fimetion is the interface between the planner and the re',dization components; we simply add a cheek in t~t function to scan through the list of relation-entries to bc combined and arrange for given relations to be filtered ouc.As fi)r the definition of "given". MUMBLE maintains a multi-purpose record of the cunmnt discourse context which, like the dictionary, is a recta- Consider what was done earlier by the "say-about' function: there the planner proposed to say Something about one object by saying a relation in which the object was involved, the text choosen for the relation being specially transformed to insure that its thematic subject was the object in question, in these situations, the planner decides to use the relatinos it does without any particular regard for their potential linguistic structure. This means that there is a certain potential for linguistic disaster. Suppose we wanted to use our earlier trio of relations about C205 as the basis of a question about S/DCI,; that is, suppose our planner is a program that is building up an augmented transition net in response to a description fed to it by its human user and that it has reached a point where it knows that there is a sub-network of the ATN that begins with the state S/DCI. but itdoes not yet know how that sub-network is reached. (This would be as if the network of figure one had the "unknown-state" in place of S/NP.) Such a planner would be motivated to ask its user:(what <state> is-.~Jeh-thnt next-state(C20S)=<state>)Realizing this question will mean coming up with a description of C205. that name being one made up by the planner rather than the user. It can of course be described in terms of its properties as already shown; however, if dais description were done without appreciating that it oecured in the middle of a question, it would be possible to produce the nonsense sentence:" where does the jump arc from lead to S/DCL?'Here the embedded reference to the "unknown-state" (part of the relation, "source-state(C205)=unknown-state") appearcd in the text as a rclative clause qualiF/ing the reference to "the jump arc". Buc because "unknown- A grammatical dictionary filter like this one for island-constraintS could also be use for the maintaince of discourse focus or for stylistic heuristics such as wheth(:r to omit a reducable verb. In general, any decision criteria that is common to all of the dictionary entries should be amenable to being abstracted out into a mechanism such as this at which point they can act transparendy to the planner and thereby gain an important modularity of linguistic and conceptual/pragmatic criteria. "['he potential problems with this technique involve questions of how much information the planner can rcasenably be expected to supply the linguistic componenL The above filter would be impossible, for example, if the macro-entry where it is applied were not able to notice that the embedded description of C205 could mention the "unknown-state" before it committed itself to ),he overall structure of the question. The sort of indexing required to do this does not seem unreasonable to me as long as the indexes are passed up with the ground dictionary entries to the macroentries. Exactly how to do this is one of the pending questions of implementation.The dictionaries of other production systems in the literature have typically been either trivial. 
~,nconditionai object to word mappi.gs Cf3,, orelse been encoded in uncxtcndable procedures CZ. | null | Main paper:
:
possible the automatic description of individual objects according to their position in the semantic net. Furthermore, because the process of deciding what properties to use in an object's description is now given over to a common procedure, we can write general-purpose rules to, for example, avoid redundancy or grammatically awkward constructionS.Regardless of its design, every system for natural !anguage production begins by selecting objects and relations from the speaker's internal model of the world, and proceeds by choosing an English phrase to describe each selected item, combining them according to the properties of the phrases and the constraints of the language's grammar and rhetoric. TO do this, the system must have a data base of some sort, in which the objects it will talk about are somewhow associated with the appropriate word or phrase (or with procedures that will construct them). 1 will refer to such a data base as a dictionary.Evcry production system has a dictionary in one form or another, and its compilation is probably the single most tedious job that the human designer must perform. In the past. typically every object and relation has been given its own individual "lex" property with the literal phrase to be used; no attempt was made to share criteria or sub-phrases between properties; and there was a tacit a~umtion that the phrase would have the right form and content in any of the contexts that the object will be mentioned. (For a review of this literature, see r~a .) However, dictionaries built in this way become increasingly harder to maintain as programs become larger and their discourse more sophisticated. We would like instead some way to de the extention of the dictionary direcdy to the extention of the program's knowledge base; then, as the knowledge base expands the dictionary will expand with it with only a minimum of additional cffort. This paper describes a technique for adapting a semantic abstraction hierarchy of thc sort providcd by ~d~-ONE ~:1.] to function directly as a dictionary for my production system MUMIII.I~ [,q'~.. Its goal is largely expositional in the sense that while the technique is fully spocificd and proto-types have been run, many implementation questions remain to be explored and it is thus premature to prescnt it as a polished system for others to use; instead, this paper is intended as a presentation of the issues--potcntial economicw---that the technique is addressing. In particular, given the intimate relationship between the choice of architecture in the network formalism used and the ability uf the dictionary to incorporate linguistically useful generalizations and utilities, this presentation may suggest additional criteria for networ k design, namely to make it easier to talk about the objects the network The basic idea of "piggybacking" the dictionary onto the speaker's regular semantic net can be illustrated very simply: Consider the KL.ONE network in figure one, a fragment taken from a conceptual taxonomy for augmented transition nets (given in [klune]). The dictionary will provide the means to describe individual concepts (filled ellipses) on the basis of their links to generic concepts lempty ellipses) and their functional roles (squar~s), as shown there for the individual concept "C205". The default English description of C205 (i.e. "the jump arc fi'om S/NP to S/DCL") is created recursiveiy from dL.~riptions of the three network relations that C205 participates in: its "supercuneept" link to the concept "jump-are". 
and its two role-value relations: "source-stateIC205)=S/NP" and "nextstate(C205)=S/t:~Ct.". Intuitively. we want to associate each of the network objects with an English phrase: the concept "art'" with the word "art"', the "source-state" role relation with the phrase "C205 comes from S/NF" (note the embedded references), and so on. The machinery that actually brings about this ~sociation is, of course, much more elaborate, involving three different recta-level networks describing the whole of the original, "domain" network, as well as an explicit representation of the English grammar (i.e. it Ls itsclf expressed in rd,-oN~). among them at run dine). When we want to describe an object, we follow out its recta-link inzo the dictionary network and then realize the word or phrase that we find. annotates the supen:oncept chain front "jump-an:" to "object"; comparable dictionary networks can be built [.or hierarchies of roles or other hierarchical network structures. Noticc how the use of an inheritance m~hanisrn within the dictionary network (denoted by the vcrticat [inks betwccn roles) allows us on the one hand to state the determiner decision (show, bern only as a cloud) once and for all at thc level of the domain conccpt "object", while at the same time we can vo:umulate or supplant lexk:al material as we move down to more specific levels in the domain nctwork.After all the inhent*n~c is factored in. dt¢ entry for. e.g., the generic concept "lump-ate" will de~:. There is much more to be said about how the "embedded entries" can be controlled, how, for example, the planner can arrange to say either "C205 goes to S/DCL" or "There is a jump arc going to S/DCL" by dynamically specializing the description of the clause, however it would be taking us too far afield: the interested reader is referred to [thesisl. The point to be made here is just that the writer of the dictionary has an option either to write specific dictionary entries for domain relations, or to leave them to general "macro entries" that will build them out of the entries for the objects involved as just sketched. Using the macro entries of course meau that less effort v, ill be needed over all, but using specific entries permits one to rake advantage of special idioms or variable phrases that are either not productive enough or not easy enough to pick out in a standard recta-level description of the domain network to be worth writing macro entries for. A simple example would be a special entry for when one plans to describe an arc in terms of both its source and its nexi states: in this case there is a nice compaction available by using die verb "connect" in a single clause (instead of one clause for each role). Since the ~I,-O~F. formalism has no transparent means of optionally bundling two roles into one, this compound rcladon has to be given its own dictionary entry by hand.Up to this point, we have been looking at associations between "organic" objects or relations in the domain network and their dictionary entries for production. It is often the case however, that the speech planner Exauszivc details of these operations may be found in ["1~ .The mechanisms of the dictionary per se perform two ~ncdons: (l) the association of the "ground level" linguistic phrases with the objeets of the domain network, and 2 purposes that the only phrase listed in dictionary for the next-state relation is the one from the first example, Le.Now. 
"say-about"s goal is a sentence that has S/DCL as its subje=.It can tell from the dictionary's annotauon and its English grammar that the phrase as it stands will not permit this since the verb "go to" does not passiviz¢; however, the phrase is amenable to a kind of deffiog transformation that would yield the text: "S/DCL L~ where C205 goe~ to'."Say-about" arraogcs for this consu'uccion by building the structure below as its representation ofi~ decision, passing it on to .~R:),mu.: for realizatiou.Note ~at this structure :'-.,.,.,.,.,.,.,.,.,~sentially a linguistic constituent structure of the .sual sort, describing the (annotated) surtace sU-ucture of dze intended text co the depth that "say-abouC' has planned it, The ~nctional labels marking the constituent positions (i.e. "subject", "verb", ccc.) control the options for the realization of the domain-network objects they initially con=in. (The objects will be subscquendy replaced by the phrases that reafizc thcm. processing from leR to righc) Thus the first instance of S/I)CI_ in the subject position, is realized without contextual effects as the name ".V/DCL": while the second instance, acting as the reladve pronoun fur the cleft, is realized as the interrogative pronoun "where": and the final instance, embedded within the "next-state" relation, is suprcsscd entirely even though the rest of the relation is expre.~cd normally. These cnutextoal variations are all entirely transparent to the dictionary mechanisms and demonstrate how we can increa~ the utility of the phrases by carefully annotating them in the dictionary and using general purpose operations chat are ~ggered by the descriptions of the phrases alone, therefore not needing to know anything about their semant~ content.This example was of contextual effects that applied aRer the domain objects had been embedded in a linguistic structure, l.inguis~c context can have its effect eadier as well by monitoring the aecumuladon p~occ~ and appiyiog its effects at that level. Considering how the phrase for the jump are C2.05 would be fonned in this same example. Since the planner's original insmaction (i.e. "(say-abm,t_ )" did not mention C205 spccifcally, the description of that ubjec~ will be IeR to the default precis discussed earlier. In the original example, C205 was dc~ribed in issoladon, her= it L~ part of an ongoing dJscou~e context which muse be allowed ru influence the proton.The default description employed all three of the domain-network relations that C205 is involved in. In this discourse context, however, one of those relations, "neat-smte(c2OS)=SIDCL". has already be given in the text: were we to include it in this realization of C'205. the result would be garishly redundant and quite unnatural, i.e. "3/DCL ~ where the jump arc from S/NP Io S/DCL goes to". To rule out this realization, we can filterttm original set of three relations, eliminating the redundant relation bemuse we know that it is already mentioned in the CCXL Doing this en~ils (1) having some way to recognize when a relauon is already given in the text. and (2) a predictable point in the preec~ when the filtering can be done. rha second is smaight fo~arcL the "describe-as" fimetion is the interface between the planner and the re',dization components; we simply add a cheek in t~t function to scan through the list of relation-entries to bc combined and arrange for given relations to be filtered ouc.As fi)r the definition of "given". 
MUMBLE maintains a multi-purpose record of the cunmnt discourse context which, like the dictionary, is a recta- Consider what was done earlier by the "say-about' function: there the planner proposed to say Something about one object by saying a relation in which the object was involved, the text choosen for the relation being specially transformed to insure that its thematic subject was the object in question, in these situations, the planner decides to use the relatinos it does without any particular regard for their potential linguistic structure. This means that there is a certain potential for linguistic disaster. Suppose we wanted to use our earlier trio of relations about C205 as the basis of a question about S/DCI,; that is, suppose our planner is a program that is building up an augmented transition net in response to a description fed to it by its human user and that it has reached a point where it knows that there is a sub-network of the ATN that begins with the state S/DCI. but itdoes not yet know how that sub-network is reached. (This would be as if the network of figure one had the "unknown-state" in place of S/NP.) Such a planner would be motivated to ask its user:(what <state> is-.~Jeh-thnt next-state(C20S)=<state>)Realizing this question will mean coming up with a description of C205. that name being one made up by the planner rather than the user. It can of course be described in terms of its properties as already shown; however, if dais description were done without appreciating that it oecured in the middle of a question, it would be possible to produce the nonsense sentence:" where does the jump arc from lead to S/DCL?'Here the embedded reference to the "unknown-state" (part of the relation, "source-state(C205)=unknown-state") appearcd in the text as a rclative clause qualiF/ing the reference to "the jump arc". Buc because "unknown- A grammatical dictionary filter like this one for island-constraintS could also be use for the maintaince of discourse focus or for stylistic heuristics such as wheth(:r to omit a reducable verb. In general, any decision criteria that is common to all of the dictionary entries should be amenable to being abstracted out into a mechanism such as this at which point they can act transparendy to the planner and thereby gain an important modularity of linguistic and conceptual/pragmatic criteria. "['he potential problems with this technique involve questions of how much information the planner can rcasenably be expected to supply the linguistic componenL The above filter would be impossible, for example, if the macro-entry where it is applied were not able to notice that the embedded description of C205 could mention the "unknown-state" before it committed itself to ),he overall structure of the question. The sort of indexing required to do this does not seem unreasonable to me as long as the indexes are passed up with the ground dictionary entries to the macroentries. Exactly how to do this is one of the pending questions of implementation.The dictionaries of other production systems in the literature have typically been either trivial. ~,nconditionai object to word mappi.gs Cf3,, orelse been encoded in uncxtcndable procedures CZ.
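The two dictionary-level filters discussed in this section, dropping relations that are already given in the discourse record and blocking relations whose realization would violate a grammatical constraint such as a wh-island, can be sketched together. The data shapes and the predicate below are my own assumptions, not MUMBLE's actual representation:

def filter_relations(relations, discourse_record, questioned_items=()):
    kept = []
    for rel in relations:
        if rel in discourse_record:
            continue     # already mentioned: repeating it would be garishly redundant
        if any(item in rel for item in questioned_items):
            continue     # its realization would embed the questioned item inside an island
        kept.append(rel)
    return kept

# Dropping "next-state(C205)=S/DCL" when S/DCL is already the subject of the
# sentence under construction:
relations = ["isa(C205)=jump-arc",
             "source-state(C205)=S/NP",
             "next-state(C205)=S/DCL"]
print(filter_relations(relations, discourse_record={"next-state(C205)=S/DCL"}))
# -> ['isa(C205)=jump-arc', 'source-state(C205)=S/NP']

Because both checks consult only the entries and the discourse record, they stay transparent to the planner, which is the modularity argued for above.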
Appendix:
| null | null | null | null | {
"paperhash": [
"woods|research_in_natural_language_understanding",
"mcdonald|natural_language_production_as_a_process_of_decision-making_under_constraints",
"goldman|computer_generation_of_natural_language_from_a_deep_conceptual_base",
"ross|constraints_on_variables_in_syntax"
],
"title": [
"Research in Natural Language Understanding",
"Natural language production as a process of decision-making under constraints",
"Computer generation of natural language from a deep conceptual base",
"Constraints on variables in syntax"
],
"abstract": [
"Abstract : The goals of the project are to develop techniques required for fluent and effective communication between a decision maker and an intelligent computerized display system in the context of complex decision tasks such as military command and control. This problem is approached as a natural language understanding problem, since most of the techniques required would still be necessary for an artificial language designed specifically for the task. Characteristics that are considered important for such communication are the ability for the user to omit details that can be inferred by the system and to express requests in a form that 'comes naturally' without extensive forethought or problem solving. These characteristics lead to the necessity for a language structure that mirrors the user's conceptual model of the task and the equivalents of anaphoric reference, ellipsis, and context-dependent interpretation of requests. these in turn lead to requirements for handling large data bases of general world knowledge to support the necessary inferences. The project is seeking to develop techniques for representing and using real world knowledge in this context, and for combining it efficiently with syntactic and semantic knowledge. This report discusses aspects of research to date and a general approach to definite anaphoric reference and near-deterministic parsing strategies.",
"1,102,701. Locating conductors. TATEISI ELECTRONICS CO. June 16, 1965 [June 24, 1964], No. 25467/65. Heading G1N. To compensate for the effect of supply voltage fluctuations on an electromagnetic detector, the output amplifier has a D. C. reference voltage varying with the mains supply. As shown the sensing head comprises a primary 2 and opposed secondaries 3, 4 (for detecting a conductor 9). The output is applied through an amplifier 14 to a common emitter trigger comprising transistors 17, 18 to operate a switching circuit 23. The switching circuit and transistors are energized from constant voltage supplies, but the emitter \"reference\" bias is derived from the current through a resistor 28 in series with a Zener diode 29 and hence varies with the A. C. supply to the sensing head.",
"Abstract : For many tasks involving communication between humans and computers it is necessary for the machine to produce as well as understand natural language. The authors describes an implemented system which generates English sentences from Conceptual Dependency networks, which are unambiguous, language- free representations of meaning. The system is designed to be task independent and thus capable of providing the language generation mechanism for such diverse problem areas as question answering, machine translation, and interviewing.",
"Massachusetts Institute of Technology. Dept. of Modern Languages and Linguistics. Thesis. 1967. Ph.D."
],
"authors": [
{
"name": [
"W. Woods",
"R. Brachman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"David D. McDonald"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"N. Goldman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Ross"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null
],
"s2_corpus_id": [
"61138592",
"45464479",
"56767024",
"60624374"
],
"intents": [
[],
[],
[
"background"
],
[]
],
"isInfluential": [
false,
false,
false,
false
]
} | null | 524 | 0.024809 | null | null | null | null | null | null | null | null |
57ccdfb05c18d5662c9a237c253921b9b6319a0b | 11399996 | null | A Rule-based Conversation Participant | The problem of modeling human understanding and generation of a coherent dialog is investigated by simulating a conversation participant. The rule-based system currently under development attempts to capture the intuitive concept of "topic" using data structures consisting of declarative representations of the subjects under discussion linked to the utterances and rules that generated them. Scripts, goal trees, and a semantic network are brought to bear by general, domain-independent conversational rules to understand and generate coherent topic transitions and specific output utterances. | {
"name": [
"Frederking, Robert E."
],
"affiliation": [
null
]
} | null | null | 19th Annual Meeting of the Association for Computational Linguistics | 1981-06-01 | 10 | 10 | null | 1. Rules, topics, and utterances Numerous systems have been proposed to model human use of language in conversation (speech acts [l] , MICS[3] , Grosz [5] ). They have attacked the problem from several different directions. Often an attempt has been made to develop some intersentential analog of syntax, despite the severe problems that grammar-oriented parsers have experienced. The program described in this paper avoids the use of such a grammar, using instead a model of the conversation's topics to provide the necessary connections between utterances. It is similar to the ELI parsing system, developed by Riesbeck and Schank [7] , in that it uses relatively small, independent segments of code (or "rules") to decide how to respond to each utterance, given the context of the utterances that have already occurred. The program currently operates in the role of a graduate student discussing qualifier exams, although the rules and control structures are independent of the domain, and do not assume any a priori topic of discussion.The main goals of this project are:• To develop a small number of general rules that manipulate internal models of topics in order to produce a coherent conversation.• To develop a 'representation for these models of topics which will enable the rules to generate responses, control the flow of conversation, and maintain a history of the system's actions during the current conversation.The rule-based approach was chosen because it appears to work in a better and more natural way than syntactic pattern matching in the domain of single utterances, even though a grammatical structure can be clearly demonstrated there. If it is awkward to use a grammar for single-sentence analysis, why expect it to work in the larger domain of human discourse,, where there is no obviously demonstrable "syntactic" structure? in place of grammar productions, rules are used which can initiate and close topics, and form utterances based on the input, current topics, and long-term knowledge. This set of rules does not include any domainspecific inferences; instead, these are placed into the semantic network when the situations in which they apply are discussed.It is important to realize that a "topic" in the sense used in this paper is not the same thing as the concept of "focus" used in the anaphora and coreference disambiguation literature. There, the idea is to decide which part of a sentence is being focused on (the "topic" of the sentence), so that the system can determine which phrase will be referred to by any future anaphoric references (such as pronouns). In this paper, a topic is a concept, possibly encompassing more than the sentence itself, which is "brought to mind" when a person hears an utterance (the "topic" of a conversation). It is used to decide which utterances can be generated in response to the input utterance, something that the focus of a sentence (by itself) can not in general do. 
The topics need to be stored (as opposed to possibly generating them when needed) simply because a topic raised by an input utterance might not be addressed until a more interesting topic has been discussed.The data structure used to represent a topic is simply an object whose value is a Conceptual Dependency (or CD) [8] description of the topic, with pointers to rules, utterances, and other topics which are causally or temporally related to it, plus an indication of what conversational goal of the program this topic is intended to fulfill. The types of relations represented include: the rule (and any utterances involved) that resulted in the generation of the topic, any utterances generated from the topic, the topics generated before and after this one (if any), and the rule (and utterances) that resulted in the closing of this topic (if it has been closed).Utterances have a similar representation: a CD expression with pointers to the rules, topics, and other utterances to which they are related. This interconnected set of CD expressions is referred to as the topic-utterance graph, a small example of which (without CDs) is illustrated in Figure 1 . Since language was originally only spoken, and used primarily as an immediate communication device, it is not unreasonable to assume that the mental machinery we wish to model is designed primarily for use in an interactive fashion, such as in dialogue. Thus, it is more natural to model one interacting participant than to try to model an external observer's understanding of the whole interaction. | null | null | One of the nice properties of rule-based systems is that they tend to have simple control structures. In the conversation participant," the rule application routine is simply an initialization followed by a loop in which a CD expression is input, rules are tried until one produces a reply-wait signal, and the output CD is printed. A special token is output tO indicate that the conversation is over, causing an exit from the loop. One can view this part of the model as an input/output interface, connecting the data structures that the rules access with the outside world.Control decisions outside of the rules themselves are handled by the agenda structure and the interest-rating routine. An agenda is essentially a list of lists, with each of the sublists referred to as a "bucket". Each bucket holds the names of one or more rules. The actual firing of rules is not as simple as indicated in the above paragraph, in that all of the rules in a bucket are tested, and allowed to fire if their test clauses are true. After all the rules in a bucket have been tested, if any of them have produced a reply-wait signal, the "best" utterance is chosen for output by the interest-rating routine, and the main loop described above continues. If none have indicated a need to wait, the next bucket is then tried. Thus, the rules in the first bucket are always tried and have highest priority.Priority decreases on a bucket.by.bucket basis down to the last bucket. In a normal agenda, the act of firing is the same as what I am calling the reply-wait signal, but in this system there is an additional twist. It is necessary to have a way to produce two sentences in a row, not necessarily tightly related to each other (such as an interjection followed by a Question). 
Rather than trying to guarantee that all such sets of rules are in single buckets, the rules have been given the ability to fire, produce an utterance, cause it to be output immediately, and not have the agenda stopped, simply by indicating that a reply-wait is not needed. It is also possible for a rule to fire without producing either an utterance or a reply-wait, as is the case for rules that simply create topics, or to produce a list of utterances, which the interest-rater must then look through.The interest-rating routine determines which of the utterances produced by the rules in a bucket (and not immediately output) is the best, and so should be output. This is done by comparing the proposed utterance to our model of the goals of the speaker, the listener, and the person being discussed. Currently only the goals of the person being discussed are examined, but this will be extended to include the goals of the other two. The comparison involves looking through our model of his goal tree, giving an utterance a higher ranking for matching a more important goal. This is adjusted by a small amount to favor utterances which imply reaching a goal and to disfavor those which imply failing to reach it. Goal trees are stored in long-term memory (see next section).There are three main kinds of memory in this model: working memory, long.term memory, and rule memory. The data structures representing working memory include several global variables plus the topic-utterance graph. The topicutterance graph has the general form of two doubly-linked lists, one consisting of all utterances input and output (in chronological order) and the other containing the topics (in the order they were generated), with various pointers indicating the relationships between individual topics and utterances. These were detailed in section 1.Long-term memory is represented as a semantic network [2] . Input utterances which are accepted as true, as well as their immediate inferences, are stored here. The typical semantic network concept has been extended somewhat to include two types of information not usually found there: goal trees and scripts.Goal trees [6, 3] are stored under individual tokens or classes (on the property GOALS) by name. They consist of several CD concepts linked together by SUBGOAL/SUPERGOAL links, with the top SUPERGOAL being the most important goal, and with importance decreasing with distance below the top of the goal tree. Goal trees represent the program's model of a person or organization's goals. Unlike an earlier conversation program [3], in this system they can be changed during the course of a conversation as the program gathers new information about the entities it already knows something about. For example, if the program knows that graduate students want to pass a particular test, and that Frank is a graduate student, and it hears that Frank passed the test, it will create an individual goal tree for Frank, and remove thegoal of passing that test. This is clone by the routine which stores CDs in the semantic network, whenever a goal is mentioned as the second clause of an inference rule that is being stored. If the rule is stored as true, the first clause of the implication is made a subgoal of the mentioned goal in the actor's goal tree. If the rule is negated, any subgoal matching the first clause is removed from the goal tree.As for scripts [9] , these are the model's episodic memory and are stored as tokens in the semantic network, under the class SCRIPT. 
Each one represents a detailed knowledge of some sequence of events (and states), and can contain instances of other scripts as events. The individual events are represented in CD, and are generally descriptions of steps in a commonly occurring routine, such as going to a restaurant or taking a train trip. In the current context, the main script deals with the various aspects of a graduate student taking a qualifier. There are parameters to a script, called "roles": in this case, the student, the writers of the exam, the graders, etc. Each role has some required preconditions. For example, any writer must be a professor at this university. There are also postconditions, such as the fact that if the student passes the qual he/she has fulfilled that requirement for the Ph.D. and will be pleased. This post-condition is an example of a domain-dependent inference rule, which is stored in the semantic network when a situation from the domain is discussed.
Finally, we have the rule memory. This is just the group of data objects whose names appear in the agenda. Unlike the other data objects, however, rules contain Lisp code, stored in two parts: the TEST and the ACTION. The TEST code is executed whenever the rule is being tried, and determines whether it fires or not. It is thus an indication of when this rule is applicable. (The conditions under which a rule is tried were given in the section on Control, section 2.1.) The ACTION code is executed when the rule fires, and returns either a list of utterances (with an implied reply-wait), an utterance with an indication that no reply-wait is necessary, or NIL, the standard Lisp symbol for "nothing". The rules can have side effects, such as creating a possible topic and then returning NIL. Although rules are connected into the topic-utterance graph, they are not really considered part of it, since they are a permanent part of the system, and contain Lisp code rather than CD expressions.
A sample of what the present version of the system can do will now be examined. It is written in MacLisp, with utterances input and output in CD. This assumes the existence of programs to map English to CD and CD to English, both of which have been previously done to a degree. The agenda currently contains six rules. The two in the highest priority bucket stop the conversation if the other person says "goodbye" or leaves (Rule3-3 and Rule3-4). They are there to test the control of the system, and will have to be made more sophisticated (i.e., they should try to keep up the conversation if important active topics remain).
The three rules in the next bucket are the heart of the system at its current level of development. The first two raise topics to request missing information. The first (Rule1) asks about missing pre-conditions for a script instance, such as when someone who is not known to be a student takes a qualifier. The second (Rule2) asks about incompletely specified postconditions, such as the actual project that someone must do if they get a remedial. At this university, a remedial is a conditional pass, where the student must complete a project in the same area as the qual in order to complete this degree requirement; there are four quals in the curriculum.
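A schematic version of such a rule may make the TEST/ACTION format concrete. This is my own sketch in Python rather than the system's MacLisp, and every name in it (the Topic class, the fact and precondition layout) is an assumption introduced for illustration; it mimics Rule1, which raises information-request topics for unverified script preconditions:

class Topic:
    def __init__(self, cd, purpose, initiated_by):
        self.cd, self.purpose, self.initiated_by = cd, purpose, initiated_by

def rule1_test(utterance, known_facts, preconditions):
    """True when the utterance instantiates a script some of whose role
    preconditions are not already known to hold."""
    return any(p not in known_facts for p in preconditions.get(utterance["script"], []))

def rule1_action(utterance, known_facts, preconditions, topics):
    missing = [p for p in preconditions.get(utterance["script"], []) if p not in known_facts]
    for p in missing:
        topics.append(Topic(cd=p, purpose="REQINFO", initiated_by=("RULE1", utterance)))
    return None   # raises topics as a side effect; no utterance and no reply-wait

RULE1 = {"test": rule1_test, "action": rule1_action}

# "Frank got a remedial on his hardware qual", with Frank not yet known to be a student:
topics = []
utterance = {"script": "$QUAL", "taker": "FRANK"}
preconditions = {"$QUAL": ["(ISA FRANK STUDENT)"]}
if RULE1["test"](utterance, known_facts=set(), preconditions=preconditions):
    RULE1["action"](utterance, set(), preconditions, topics)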
The third rule in this bucket (Rule4) generates questions from topics that are open requests for information; it is summarized in Figure 3-1.

Test: Are there any topics which are requests for information which have not been answered?
Action: Retrieve the hypothetical part, form all "necessary" questions, and offer them as utterances.

The last bucket in the agenda simply has a rule which says "I don't understand" in response to things that none of the previous rules generated a response to (RuleK). This serves as a safety net for the control structure, so it does not have to worry about what to do if no response is generated.

Now let us look at how the program handles an actual conversation fragment. The program always begins by asking "What's new?", to which (this time) it gets the reply, "Frank got a remedial on his hardware qual." The CD form for this is shown in Figure 3-2 (the program currently assumes that the person it is talking to is a student it knows named John). The CD version is an instance of the qual script, with Frank, hardware, and a remedial being the taker, area, and result, respectively.

((<=> ($QUAL &AREA ('HARDWARE') &TAKER ('FRANK') &RESULT ('REMEDIAL'))))
(ISA ('UTTERANCE') PERSON "JOHN" PRED UTTS)
Figure 3-2: First input utterance

When the rules examine this, five topics are raised, one due to the pre-condition that he has not passed the qual before (by Rule1), and four due to various partially specified postconditions (by Rule2):
• If Frank was confident, he will be unhappy.
• If he was not confident, he will be content.
• He has to do a project. We don't know what.
• If he has completed his project, he might be able to graduate.

The system only asks about things it does not know. In this case, it knows that Frank is a student, so it does not ask about that.
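Rule4's behaviour, turning each unanswered request-for-information topic into candidate questions, can be suggested with a few lines of illustrative Python. The topic dictionaries and the English phrasing are stand-ins; the real system works on CD expressions.

```python
def rule4(state):
    """Form a question for every open request-for-information topic, then wait for a reply."""
    questions = []
    for topic in state["topics"]:
        if topic["cpurpose"] == "REQINFO" and not topic["answered"]:
            # A real system would phrase this from the hypothetical part of the topic's CD.
            questions.append(f"Can you tell me {topic['missing']}?")
    return questions, True        # True: a reply-wait is needed

state = {"topics": [
    {"cpurpose": "REQINFO", "missing": "what project he must do", "answered": False},
    {"cpurpose": "REQINFO", "missing": "whether he is content", "answered": True},
]}
print(rule4(state))               # only the unanswered topic yields a question
```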
Three are generated from the first topic, one iS generated from each of the next three topics, and none is generated from the last topic. The interest rating routine now compares these utterances to Frank's goals, and picks the most interesting one. Because of the new goal tree, the last three utterances match none of Frank's goals, and receive zero ratings. The first one matches his third goal in a neutral way, and receives a rating of 56 (an utterance receives 64 points for the top goal, minus 4 for each level below top, plus or minus one for positive/negative implications. These numbers are, of course, arbitrary, as long as ratings from different goals do not overlap). The second one matches his top goal in a neutral way, and receives 64.Finally, the third one matches his top goal in a negative way, and receives 63. Therefore, the second cluestion gets uttered, and ends uP with the links shown in Figure 3 ('HARDWARE'))) MOD ('?" "NEG')) Hadn't he taken it before? ((< = > ($QUAL &TAKER ('FRANK') &AREA (" HARDWARE ") &RESULT ( • CANCELLED'))) MOO ('?')) Had it been cancelled on him before? ((< = > ($QUAL &TAKER ('FRANK') &AREA ('HARDWARE') &RESULT ('FAILED'))) MOD ('?°)) Had he failed it before? Figu re 3-7: System's response to first utterance oriented systems [5] operate in the context of some fixed task which both speakers are trying to accomplish. Because of this, they can infer the topics that are likely to be discussed from the semantic structure of the task. For example, a task. oriented system talking about qualifiers would use the knowledge of how to be a student in order tO talk about those things relevant to passing qualifiers (simulating a very studious student). It would not usually ask a question like "Is Frank content?.", because that does not matter from a practical point of view.Speech acts based systems (such as [1]) try to reason about the plans that the actors in the conversation are trying to execute, viewing each utterance as an operator on the environment. Consequently, they are concerned mostly about what people mean when they use indirect speech acts (such as using "It's cold in here" to say "Close the window") and are not as concerned about trying to say interesting things as this system is. Another way to took at the two kinds of systems is that speech acts systems reason about the actors' plans and assume fixed goals, whereas this system reasons primarily about their goals.As for related work, ELI (the language analyzer mentioned in section 1) and this system (when fully developed) could theoretically be merged into a single conversation system, with some rules working on mapping English into CD, and others using the CD to decide what responses to generate. In fact, there are situations in which one needs to make use of both kinds of information (such as when a phrase signals a topic shift: "On the other hand..."). One of the possible directions for future work is the incorporation and integration of a rule-based parser into the system, along with some form of rule-based English generation. Another related system, MICS [3], had research goals and a set of knowledge sources somewhat .similar to this system's, but it differed primarily in that it could not alter its goal trees during a conversation, nor did it have explicit data structures for representing topics (the selection of topics was built into the interpreter).The main results of this research so far have been the topicutterance graph and dynamic goal trees. 
Although some way of holding the intersentential information was obviously needed, no precise form was postulated initially. The current structure was invented after working with an earlier set of rules to discover the most useful form the topics could take. Similarly, the idea that a changing view of someone else's goals should be used to control the course of the conversation arose during work on producing the interestrating routine. The current system is, of course, by no means a complete model of human discourse. More rules need to be developed, and the current ones need to be refined.In addition to implementing more rules and incorporating a parser, possible areas for future work include replacing the interest-rater with a second agenda (containing interestdetermining rules), changing scripts and testing whether the 8"7 rules are truly independent of the subject matter, trying to make the system work with several scripts at once (as SAM [4] does), and improving the semantic network to handle the well-known problems which may arise.[1][2][3][4][5][6][7][8][9] | null | Main paper:
control:
One of the nice properties of rule-based systems is that they tend to have simple control structures. In the conversation participant," the rule application routine is simply an initialization followed by a loop in which a CD expression is input, rules are tried until one produces a reply-wait signal, and the output CD is printed. A special token is output tO indicate that the conversation is over, causing an exit from the loop. One can view this part of the model as an input/output interface, connecting the data structures that the rules access with the outside world.Control decisions outside of the rules themselves are handled by the agenda structure and the interest-rating routine. An agenda is essentially a list of lists, with each of the sublists referred to as a "bucket". Each bucket holds the names of one or more rules. The actual firing of rules is not as simple as indicated in the above paragraph, in that all of the rules in a bucket are tested, and allowed to fire if their test clauses are true. After all the rules in a bucket have been tested, if any of them have produced a reply-wait signal, the "best" utterance is chosen for output by the interest-rating routine, and the main loop described above continues. If none have indicated a need to wait, the next bucket is then tried. Thus, the rules in the first bucket are always tried and have highest priority.Priority decreases on a bucket.by.bucket basis down to the last bucket. In a normal agenda, the act of firing is the same as what I am calling the reply-wait signal, but in this system there is an additional twist. It is necessary to have a way to produce two sentences in a row, not necessarily tightly related to each other (such as an interjection followed by a Question). Rather than trying to guarantee that all such sets of rules are in single buckets, the rules have been given the ability to fire, produce an utterance, cause it to be output immediately, and not have the agenda stopped, simply by indicating that a reply-wait is not needed. It is also possible for a rule to fire without producing either an utterance or a reply-wait, as is the case for rules that simply create topics, or to produce a list of utterances, which the interest-rater must then look through.The interest-rating routine determines which of the utterances produced by the rules in a bucket (and not immediately output) is the best, and so should be output. This is done by comparing the proposed utterance to our model of the goals of the speaker, the listener, and the person being discussed. Currently only the goals of the person being discussed are examined, but this will be extended to include the goals of the other two. The comparison involves looking through our model of his goal tree, giving an utterance a higher ranking for matching a more important goal. This is adjusted by a small amount to favor utterances which imply reaching a goal and to disfavor those which imply failing to reach it. Goal trees are stored in long-term memory (see next section).There are three main kinds of memory in this model: working memory, long.term memory, and rule memory. The data structures representing working memory include several global variables plus the topic-utterance graph. 
The topicutterance graph has the general form of two doubly-linked lists, one consisting of all utterances input and output (in chronological order) and the other containing the topics (in the order they were generated), with various pointers indicating the relationships between individual topics and utterances. These were detailed in section 1.Long-term memory is represented as a semantic network [2] . Input utterances which are accepted as true, as well as their immediate inferences, are stored here. The typical semantic network concept has been extended somewhat to include two types of information not usually found there: goal trees and scripts.Goal trees [6, 3] are stored under individual tokens or classes (on the property GOALS) by name. They consist of several CD concepts linked together by SUBGOAL/SUPERGOAL links, with the top SUPERGOAL being the most important goal, and with importance decreasing with distance below the top of the goal tree. Goal trees represent the program's model of a person or organization's goals. Unlike an earlier conversation program [3], in this system they can be changed during the course of a conversation as the program gathers new information about the entities it already knows something about. For example, if the program knows that graduate students want to pass a particular test, and that Frank is a graduate student, and it hears that Frank passed the test, it will create an individual goal tree for Frank, and remove thegoal of passing that test. This is clone by the routine which stores CDs in the semantic network, whenever a goal is mentioned as the second clause of an inference rule that is being stored. If the rule is stored as true, the first clause of the implication is made a subgoal of the mentioned goal in the actor's goal tree. If the rule is negated, any subgoal matching the first clause is removed from the goal tree.As for scripts [9] , these are the model's episodic memory and are stored as tokens in the semantic network, under the class SCRIPT. Each one represents a detailed knowledge of some sequence of events (and states), and can contain instances of other scripts as events. The individual events are represented in CD, and are generally descriptions of steps in a commonly occuring routine, such as going to a restaurant or taking a train trip. In the current context, the main script deals with the various aspects of a graduate student taking a qualifier. There are parameters to a script, called "roles" • in this case, the student, the writers of the exam, the graders, etc. Each role has some required preconditions. For example, any writer must be a professor at this university. There are also postconditions, such as the fact that if the student passes the qual he/she has fulfilled that requirement for the Ph.D. and will be pleased. This post-condition is an example of a domain-dependent inference rule, which is stored in the semantic network when a situation from the domain is discussed.Finally, we have the rule memory. This is just the group of data objects whose names appear in the agenda. Unlike the other data objects, however, rules contain Lisp code, stored in two parts: the TEST and the ACTION. The TEST code is executed whenever the rule is being tried, and determines whether it fires or not. It is thus an indication of when this rule is applicable. (The conditions under which a rule is tried were given in the section on Control, section 2.1). 
The ACTION code is executed when the rule fires, and returns either a list of utterances (with an implied reply-wait), an utterance with an indication that no reply wait is necessary, or NIL, the standard Lisp symbol for "nothing". The rules can have side effects, such as creating a possible topic and then returning NIL. Although rules are connected into the topic-utterance graph, they are not really considered part of it, since they are a permanent part of the system, and contain Lisp code rather than CO expressions.
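Taken together, the agenda, the buckets, and the TEST/ACTION rules amount to a short control loop. The following Python is a rough sketch under the assumption that a rule's action returns a list of utterances plus a reply-wait flag and that an interest rater is available; the names Rule and respond are inventions (the original is MacLisp), while RuleK's wording is taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Rule:
    """A named TEST/ACTION pair, as in the rule memory just described."""
    name: str
    test: Callable[[Dict], bool]                       # is the rule applicable now?
    action: Callable[[Dict], Tuple[List[str], bool]]   # (utterances, reply_wait_needed)

def respond(agenda: List[List[Rule]], state: Dict, rate: Callable[[str], int]) -> str:
    """Try the buckets in priority order; stop at the first bucket that wants a reply."""
    for bucket in agenda:
        waiting = []
        for rule in bucket:
            if not rule.test(state):
                continue
            utterances, reply_wait = rule.action(state)
            if utterances and not reply_wait:
                print(*utterances)                     # e.g. an interjection, output at once
            elif utterances:
                waiting.extend(utterances)             # compete for the single reply slot
        if waiting:
            return max(waiting, key=rate)              # the interest rater picks the best
    return ""

# The safety net in the lowest-priority bucket, comparable to RuleK.
rule_k = Rule("RuleK", lambda s: True, lambda s: (["I don't understand."], True))
print(respond([[], [rule_k]], {"topics": []}, rate=len))
```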
an example explained:
A sample of what the present version of the system can do will now be examined. It is written in MacLisp, with utterances input and output in CO. This assumes the existence of programs to map English to CO and CD to English, both of which have been previously done to a degree. The agenda currently contains six rules. The two in the highest priority bucket stop the conversation if the other person says "goodbye" or leaves (Rule3-3 and Rule3-4). They are there to test the control of the system, and will have to be made more sophisticated (i.e., they should try to keep up the conversation if important active topics remain).The three rules in the next bucket are the heart of the system at its current level of development. The first two raise topics to request missing information. The first (Rule1) asks about missing pre-conditions for a script instance, such as when someone who is not known to be a student takes a qualifier. The second (Rule2) asks about incompletely specified postconditions, such as.the actual project that someone must do if they get a remedial. At this university, a remedial is a conditional pass, where the student must complete a project in the same area as the qual in order to complete this degree recluirement; there are four quals in the curriculum. The third rule in this bucket (Rule4) generates questions from topics that are open requests for information, and is illustrated in Figure 3 Test: Are there any topics which are requests for information which have not been answered?Action: Retrieve the hypothetical part, form all "necessary" questions, and offer them as utterances.The last bucket in the agenda simply has a rule which says "1 don't understand" in response to things that none of the previous rules generated a response to (RuleK). This serves as a safety net for the control structure, so it does not have to worry about what to do if no response is generated.Now let us look at how the program handles an actual conversation fragment. The program always begins by asking "What's new?", to which (this time) it gets the reply, "Frank got a remedial on his hardware qual." The CO form for this is shown in Figure 3 -2 (the program currently assumes that the person it is talking to is a student it knows named John). The CD version is an instance of the qual script, with Frank, hardware, and a remedial being the taker, area, and result, respectively.((< = > ($QUAL &AREA (=HARDWARE*) &TAKER ('FRANK') &RESULT ('REMEDIAL')))) (ISA ('UTTERANCE*) PERSON "JOHN" PRED UTrS)Figure 3-2." First input utteranceWhen the rules examine this, five topics are raised, one due to the pre-condition that he has not passed the qual before (by Rule1), and four due to various partially specified postconditions (by Rule2):• If Frank was confident, he will be unhappy.• If he was not confident, he will be content.• He has to do a project. We don't know what.• If he has completed his project, he might be able to graduate.The system only asks about things it does not know. In this case, it knows that Frank is a student, so it does not ask aJoout that. 
As an example, the topic that asks whether he is content is illustrated in Figure 3 -3.((CON ((< = > ($QUAL &AREA ('HARDWARE') &TAKER ('FRANK') &RESULT ('REMEDIAL')))) LEADTO ((CON ((ACTOR ('FRANK') IS ('CONFIDENCE" VAL (> 0))) MOP ('NEG" "HYPO')) LEADTO ((ACTOR ('FRANK') IS ('HAPPINESS" VAL (0))))) MOP ('HYPO')))) (INITIATED (U0013) SUCC T0009 CPURPOSE REQINFO INITIATEDBY (RULE2 U0002) ISA ('TOPIC') PRED T0004)Along with raising these topics, the rules store the utterance and script post-inferences in the semantic network, under all the nodes mentioned in them. The following have been stored under Frank by this point:• Frank got a remedial on his hardware qual.• If he was confident, he'll be unhappy.• If he was not confident, he'll be content.• Passing the hardware clual will not contribute to his graduating.• He has a hardware project to do.• Finishing his hardware project will contribute to his graduating.While these were being stored, Frank's goal tree was altered. This occurred because two of the post-inferences are themselves inference rules that affect whether he will graduate, and graduating is already assumed to be a goal of any student. Thus when the first is stored, a new goal tree is created for Frank (since his interests were represented before by the Student goal tree), and the goal of passing the hardware clual is removed. When 'the second is stored, the goal of finishing the project is added below that of graduating on Frank's tree. These goal trees are illustrated in Figures 3-4 and 3-5. At this point, six utterances are generated by Rule4. They are given in Figure 3 -6. Three are generated from the first topic, one iS generated from each of the next three topics, and none is generated from the last topic. The interest rating routine now compares these utterances to Frank's goals, and picks the most interesting one. Because of the new goal tree, the last three utterances match none of Frank's goals, and receive zero ratings. The first one matches his third goal in a neutral way, and receives a rating of 56 (an utterance receives 64 points for the top goal, minus 4 for each level below top, plus or minus one for positive/negative implications. These numbers are, of course, arbitrary, as long as ratings from different goals do not overlap). The second one matches his top goal in a neutral way, and receives 64.Finally, the third one matches his top goal in a negative way, and receives 63. Therefore, the second cluestion gets uttered, and ends uP with the links shown in Figure 3 ('HARDWARE'))) MOD ('?" "NEG')) Hadn't he taken it before? ((< = > ($QUAL &TAKER ('FRANK') &AREA (" HARDWARE ") &RESULT ( • CANCELLED'))) MOO ('?')) Had it been cancelled on him before? ((< = > ($QUAL &TAKER ('FRANK') &AREA ('HARDWARE') &RESULT ('FAILED'))) MOD ('?°)) Had he failed it before? Figu re 3-7: System's response to first utterance oriented systems [5] operate in the context of some fixed task which both speakers are trying to accomplish. Because of this, they can infer the topics that are likely to be discussed from the semantic structure of the task. For example, a task. oriented system talking about qualifiers would use the knowledge of how to be a student in order tO talk about those things relevant to passing qualifiers (simulating a very studious student). 
It would not usually ask a question like "Is Frank content?.", because that does not matter from a practical point of view.Speech acts based systems (such as [1]) try to reason about the plans that the actors in the conversation are trying to execute, viewing each utterance as an operator on the environment. Consequently, they are concerned mostly about what people mean when they use indirect speech acts (such as using "It's cold in here" to say "Close the window") and are not as concerned about trying to say interesting things as this system is. Another way to took at the two kinds of systems is that speech acts systems reason about the actors' plans and assume fixed goals, whereas this system reasons primarily about their goals.As for related work, ELI (the language analyzer mentioned in section 1) and this system (when fully developed) could theoretically be merged into a single conversation system, with some rules working on mapping English into CD, and others using the CD to decide what responses to generate. In fact, there are situations in which one needs to make use of both kinds of information (such as when a phrase signals a topic shift: "On the other hand..."). One of the possible directions for future work is the incorporation and integration of a rule-based parser into the system, along with some form of rule-based English generation. Another related system, MICS [3], had research goals and a set of knowledge sources somewhat .similar to this system's, but it differed primarily in that it could not alter its goal trees during a conversation, nor did it have explicit data structures for representing topics (the selection of topics was built into the interpreter).The main results of this research so far have been the topicutterance graph and dynamic goal trees. Although some way of holding the intersentential information was obviously needed, no precise form was postulated initially. The current structure was invented after working with an earlier set of rules to discover the most useful form the topics could take. Similarly, the idea that a changing view of someone else's goals should be used to control the course of the conversation arose during work on producing the interestrating routine. The current system is, of course, by no means a complete model of human discourse. More rules need to be developed, and the current ones need to be refined.In addition to implementing more rules and incorporating a parser, possible areas for future work include replacing the interest-rater with a second agenda (containing interestdetermining rules), changing scripts and testing whether the 8"7 rules are truly independent of the subject matter, trying to make the system work with several scripts at once (as SAM [4] does), and improving the semantic network to handle the well-known problems which may arise.[1][2][3][4][5][6][7][8][9]
:
1. Rules, topics, and utterances Numerous systems have been proposed to model human use of language in conversation (speech acts [l] , MICS[3] , Grosz [5] ). They have attacked the problem from several different directions. Often an attempt has been made to develop some intersentential analog of syntax, despite the severe problems that grammar-oriented parsers have experienced. The program described in this paper avoids the use of such a grammar, using instead a model of the conversation's topics to provide the necessary connections between utterances. It is similar to the ELI parsing system, developed by Riesbeck and Schank [7] , in that it uses relatively small, independent segments of code (or "rules") to decide how to respond to each utterance, given the context of the utterances that have already occurred. The program currently operates in the role of a graduate student discussing qualifier exams, although the rules and control structures are independent of the domain, and do not assume any a priori topic of discussion.The main goals of this project are:• To develop a small number of general rules that manipulate internal models of topics in order to produce a coherent conversation.• To develop a 'representation for these models of topics which will enable the rules to generate responses, control the flow of conversation, and maintain a history of the system's actions during the current conversation.The rule-based approach was chosen because it appears to work in a better and more natural way than syntactic pattern matching in the domain of single utterances, even though a grammatical structure can be clearly demonstrated there. If it is awkward to use a grammar for single-sentence analysis, why expect it to work in the larger domain of human discourse,, where there is no obviously demonstrable "syntactic" structure? in place of grammar productions, rules are used which can initiate and close topics, and form utterances based on the input, current topics, and long-term knowledge. This set of rules does not include any domainspecific inferences; instead, these are placed into the semantic network when the situations in which they apply are discussed.It is important to realize that a "topic" in the sense used in this paper is not the same thing as the concept of "focus" used in the anaphora and coreference disambiguation literature. There, the idea is to decide which part of a sentence is being focused on (the "topic" of the sentence), so that the system can determine which phrase will be referred to by any future anaphoric references (such as pronouns). In this paper, a topic is a concept, possibly encompassing more than the sentence itself, which is "brought to mind" when a person hears an utterance (the "topic" of a conversation). It is used to decide which utterances can be generated in response to the input utterance, something that the focus of a sentence (by itself) can not in general do. The topics need to be stored (as opposed to possibly generating them when needed) simply because a topic raised by an input utterance might not be addressed until a more interesting topic has been discussed.The data structure used to represent a topic is simply an object whose value is a Conceptual Dependency (or CD) [8] description of the topic, with pointers to rules, utterances, and other topics which are causally or temporally related to it, plus an indication of what conversational goal of the program this topic is intended to fulfill. 
The types of relations represented include: the rule (and any utterances involved) that resulted in the generation of the topic, any utterances generated from the topic, the topics generated before and after this one (if any), and the rule (and utterances) that resulted in the closing of this topic (if it has been closed).Utterances have a similar representation: a CD expression with pointers to the rules, topics, and other utterances to which they are related. This interconnected set of CD expressions is referred to as the topic-utterance graph, a small example of which (without CDs) is illustrated in Figure 1 . Since language was originally only spoken, and used primarily as an immediate communication device, it is not unreasonable to assume that the mental machinery we wish to model is designed primarily for use in an interactive fashion, such as in dialogue. Thus, it is more natural to model one interacting participant than to try to model an external observer's understanding of the whole interaction.
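A minimal rendering of the topic and utterance records and the links between them might look as follows. The Python field names (cpurpose, initiated_by, pred, succ) are adapted from the node printout shown in the example section, but the exact layout is an assumption rather than the system's actual representation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Node:
    """Common bookkeeping for entries in the topic-utterance graph."""
    cd: str                                    # the CD expression, kept as a string here
    pred: Optional["Node"] = None              # previous node of the same kind
    succ: Optional["Node"] = None              # next node of the same kind

@dataclass
class Utterance(Node):
    speaker: str = "SYSTEM"
    topics: List["Topic"] = field(default_factory=list)       # topics it relates to

@dataclass
class Topic(Node):
    cpurpose: str = "REQINFO"                  # the conversational goal it serves
    initiated_by: Tuple = ()                   # (rule, utterance) that raised it
    initiated: List[Utterance] = field(default_factory=list)  # utterances generated from it
    closed_by: Optional[Tuple] = None          # (rule, utterance) that closed it, if any

# One input utterance raising one topic, as in the Frank example.
u = Utterance(cd="((<=> ($QUAL &TAKER ('FRANK') ...)))", speaker="JOHN")
t = Topic(cd="(hypothetical: what project Frank must do)", initiated_by=("Rule2", u))
u.topics.append(t)
```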
Appendix:
| null | null | null | null | {
"paperhash": [
"riesbeck|comprehension_by_computer_:_expectation-based_analysis_of_sentences_in_context",
"green|who_technical_report",
"carbonell|subjective_understanding,_computer_models_of_belief_systems",
"findler|associative_networks-_representation_and_use_of_knowledge_by_computers",
"cullingford|script_application:_computer_understanding_of_newspaper_stories.",
"grosz|the_representation_and_use_of_focus_in_dialogue_understanding."
],
"title": [
"Comprehension by computer : expectation-based analysis of sentences in context",
"WHO Technical Report",
"Subjective understanding, computer models of belief systems",
"Associative Networks- Representation and Use of Knowledge by Computers",
"Script application: computer understanding of newspaper stories.",
"The representation and use of focus in dialogue understanding."
],
"abstract": [
"Abstract : ELI (English Language Interpreter) is a natural language parsing program currently used by several story understanding systems. ELI differs from most other parsers in that it: produces meaning representations (using Schank's Conceptual Dependency system) rather than syntactic structures; uses syntactic information only when the meaning can not be obtained directly; talks to other programs that make high level inferences that tie individual events into coherent episodes; uses context-based exceptions (conceptual and syntactic) to control its parsing routines. Examples of texts that ELI has understood, and details of how it works are given.",
"The Feather River Coordinated Resource Management Group (FR-CRM) has been restoring channel/ meadow/ floodplain systems in the Feather River watershed since 1985. Project and watershed-wide monitoring has shown multiple benefits of this type of work. With the concern over global climate change, the group wanted to measure the carbon sequestered in project areas. No protocol was found to measure carbon stores in native Sierra Nevada meadows. Plumas County funded the FR-CRM to conduct a pilot study to develop such a protocol. The sampling protocol included discrete sampling at consistent soil depths to determine the vertical distribution of carbon. A Technical Advisory Committee developed and refined a multi-project sampling protocol for three restored meadows and three un-restored meadows. Data from the un-restored meadows will also provide base-line data for before and after restoration comparisons. Initial data analysis indicates that restored meadows contain twice as much total carbon as degraded meadows; on average approximately 40 tonnes more carbon per acre. Virtually all of the additional carbon in restored meadows occurs in the soil, and is thus protected from loss via grazing, haying, wildfire, etc. Introduction In 1994 the Feather River Coordinated Resource Management (FR-CRM) group shifted its stream restoration approach from bank stabilization to landscape function. Called meadow re-watering, this approach entails returning the incised stream channel to the remnant channel(s) on the historic floodplain and eliminating the incised channel as a feature in the landscape. Historic channel incision resulted in significant land degradation as the adjacent groundwater levels dropped commensurate with the incising stream bed. Vegetation conversion rapidly follows as deep, densely rooted meadow plant communities convert to xeric shrubs and other plants. After a decade of meadow restoration, the FR-CRM recognized the possibility of a significant change in carbon stocks in these restored meadows and valleys. Plumas County has been a leader in advocating for investment in watershed ecosystem services such as water storage and filtering, and now, carbon sequestration. The county provided funding for the FR-CRM to conduct a pilot study of carbon in biomass and soils. Watershed Location and Characteristics The upper Feather River watershed is located in northeastern California encompassing 3,222 square miles that drains west from east of the Sierra crest into Oroville Reservoir and thence to the Sacramento River. Annual runoff produced from this watershed provides over 1,400 MW of hydroelectric power, and represents a significant component of the California State Water Project, annually providing 2.3 millionacre feet of water for urban, industrial and agricultural consumers downstream. The Feather River watershed is primarily comprised of two distinct geologies: the Sierra Nevada granitic batholith of the western third of the watershed; and Basin and Range fault-block meta-volcanics, metasedimentary and recent basalts in the eastern two-thirds. It is the Basin and Range zone (Diamond Mtns.) of the watershed that has been the primary area of restoration. This geologic mélange of faulted and weathered rock has resulted in over 390 square miles of expansive meadows and valleys comprised of deep fine grained alluvium, shown as green and yellow in Figure 1. Figure 1. 
Upper Feather River Watershed Upper watershed meadows and valleys (shown as green/yellow in Figure 1), often dozens of miles in length, once supported a rich ecosystem of meadow and riparian habitats, for coldwater-loving trout, a diversity of wildlife, and indigenous peoples during the dry summers of California’s Mediterranean climate. The densely rooted vegetation, cohesive soils and expansive floodplains all contributed to the sustainability of these meso-scale floodplain meadows, with associated alluvial fans. River system segments are often characterized simplistically as transport and depositional reaches. Depositional reaches feature lower gradients and a more expansive fluvial setting. These landscape attributes, in conjunction with the type and quantity of sediment, debris and nutrients, are what provide for the development and evolution of meso-scale “sinks” or “warehouses”, for the hydrologic products of the basin. Viewed as a macro-hyporheic corridor ( Harvey and Wagner, 2000; Boulton, et.al., 1998; Stanford and Ward, 1993) these features are crucial as a landscape zone of active mass and energy transfer as well as an active storage reservoir for water, sediment and nutrients. The long-term recruitment and evolution of these features involve physical, Figure 2. Typical Alluvial Features biological and chemical synthesis within the natural variability of fluvial processes. Euro-American settlement of the watershed began in 1850 with gold mining in the western portions of the watershed and, soon thereafter, agricultural production in meadows to support the mining communities. Dairy farming, horses (for cavalry mounts), sheep and beef cattle were some of the early intensive disturbances that led to localized channel incision. The resultant lowering of shallow groundwater elevations began to alter and weaken the vegetative structure of the system. Soon, near the burgeoning communities in the mid-elevation valleys, a permanent road system was established with frequent channel manipulation and relocation efforts to simplify drainage and minimize bridge construction, again leading to localized incision. In the early 1900’s both an intercontinental, and numerous local, railroad systems were constructed throughout the watershed. The local railroad networks, for the purpose of both mining and logging, were routed through the long low-gradient valleys for ease of construction. These valleys were still relatively wet at that time so elevated grades were constructed using adjacent borrow ditches. By 1940, the severe morphological changes imposed by the railroad grades, in conjunction with the above referenced land use impacts resulted in rapid, severe systemic incision of many upper watershed meadow systems. In the mid 1980’s numerous watershed stakeholders adopted a statutory authority that allowed for Coordinated Resource Management and Planning (CRMP). Twenty-four federal, state and local, public and private entities now form the Feather River Coordinated Resource Management (FRCRM) group to adopt, support and implement a watershed-wide restoration program. FR-CRM Restoration Approach & Background The FRCRM began an ongoing implementation program to address these watershed issues in 1990. Initially, these projects focused on geomorphic restoration techniques (Rosgen, 1996) to stabilize incised stream channels. 
While overall success was encouraging, the projects illustrated the concept that any restoration work in the incised channels was subject to elevated stresses even in moderate flood events (510 year return interval). Concurrently, the benefits from this approach were localized and limited to reduced erosion, and incremental improvement of aquatic habitats and water quality. Little overall improvement of watershed conditions was being realized (Wilcox, et al 2001). This led to re-evaluating restoration approach to encompass the entire historic fluvially-evolved valley bottom. Called meadow re-watering, this approach entails returning the incised stream channel to the remnant channel(s) on the historic floodplain and eliminating the incised channel as a water conveyance feature in the landscape (Figures 3 & 4 and photos 1a, 1b, 2a & 2b). Simultaneously, the FRCRM had received a project assistance request from the United States Forest Service, Plumas National Forest (PNF) to develop restoration alternatives for Cottonwood Creek in the Big Flat Meadow (Photos 2a & 2b). FRCRM staff, led by Jim Wilcox, began conducting surveys and data collection that included the entire relic meadow from hillslope to hillslope. This data collection effort quickly pointed to the nascent meadow re-watering technology as a likely restoration alternative. Figure 3. Typical cross-section, showing pre-project incision, post-project plug elevation, and the new channel. Photos 1a and 1b below show this same cross-section, however, the entire gully is not shown in the pre-project photo. Photo 1aClarks Creek Pre-project, July, 2001 Photo 1bClarks Creek Post project, July, 2006 The rocks in the background of photos 1a and 1b can be used for reference. Because the new channel is in a different location, the photo point also moved in order to show the channel in the preand postproject conditions. Figure 4. Typical cross-section, showing pre-project incision, post-project plug elevation, and the new channel. Implemented in 1995, this project quickly validated the fundamental soundness of this approach. The one mile long, 47 acre project produced elevated shallow groundwater levels, eliminated gully wall erosion, filtered sediments delivered from the upper watershed, extended and increased summer baseflows, and reversed the xeric vegetation trends resulting in improved terrestrial, avian and aquatic habitats. These benefits persisted despite withstanding a 100-year RI (return interval) flood in 1997. Photo 2aBig Flat Pre-project, Dec.,1993 Photo 2bBig Flat Post project, May, 2006 The success of this initial project led to the implementation of an additional 18 projects utilizing this technology (Table 1.). Varying in scale and watershed characteristics, these projects have restored another 20 miles of channel and 5,000 acres of meadow/floodplain. Carbon Sequestration Qualitatively, these projects appeared to significantly increase organic carbon stocks through the much increased root mass as well as increased surface growth, and, possibly, through the more effective hyporheic exchange throughout the meadow. The purpose of the following protocol is to quantitatively establish the effe",
"Abstract : Modeling human understanding of natural language requires a model of the processes underlying human thought. No two people think exactly alike; different people subscribe to different beliefs and are motivated by different goals in their activities. A theory of subjective understanding has been proposed to account for subjectively-motivated human thinking ranging from ideological belief to human discourse and personality traits. A process-model embodying this theory has been implemented in a computer system, POLITICS. POLITICS models human ideological reasoning in understanding the natural language text of international political events. POLITICS can model either liberal or conservative ideologies. Each ideology produces a different interpretation of the input event. POLITICS demonstrates its understanding by answering questions in natural language question-answer dialogs.",
"Upon opening this book and leafing through the pages, one gets the impression of an important compendium. The fourteen articles provide good coverage of semantic networks and related systems for representing knowledge. Their average length of 33 pages is long enough to give each author reasonable scope, yet short enough to permit a variety of viewpoints to be expressed in a single volume. The editor should be commended for his efforts in putting together a wellorganized book instead of just another collection of unrelated papers.",
"Abstract : The report describes a computer story understander which applies knowledge of the world to comprehend what it reads. The system, called SAM, reads newspaper articles from a variety of domains, then demonstrates its understanding by summarizing or paraphrasing the text, or answering questions about it. (Author)",
"Abstract : This report develops a representation of focus of attention thatcircumscribes discourse contexts within a general representation ofknowledge. Focus of attention is essential to any comprehension processbecause what and how a person understands is strongly influenced bywhere his attention is directed at a given moment. To formalize thenotion of focus, the need for and the use of focus mechanisms areconsidered from the standpoint of building a computer system that canparticipate in a natural language dialogue with a ser, Two ranges offocus, global and immediate, are investigated, and representations forincorporating them in a computer system are developed.The global focus in which an utterance is interpreted is determinedby the total discourse and situational setting of the utterance. Itinfluences what is talked about, how different concepts are introduced,and how concepts are referenced. To encode global focuscomputationally, a representation is developed that highlights thoseitems that are relevant at a given place in a dialogue. The underlyingknowledge representation is segmented into subunits, called focusspaces, that contain those items that are in the focus of attention of adialogue participant during a particular part of the dialogue.Mechanisms are required for updating the focus representation,because, as a dialogue progresses, the objects and actions that arerelevant to the conversation, and therefore in the participants' focusof attention, change. Procedures are described for deciding when andhow to shift focus in task-oriented dialogues, i.e., in dialogues inwhich the participants are cooperating in a shared task. Theseprocedures are guided by a representation of the task being performed.The ability to represent focus of attention in a languageunderstanding system results in a new approach to an important problemin discourse comprehension -- the identification of the referents ofdefinite noun phrases."
],
"authors": [
{
"name": [
"C. Riesbeck",
"R. Schank"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Morris W. Green"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Carbonell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"N. Findler"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. E. Cullingford"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Grosz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"60546035",
"28467205",
"142895805",
"15616277",
"60708295",
"61114426"
],
"intents": [
[],
[],
[],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false
]
} | - Problem: The paper investigates the problem of modeling human understanding and generation of coherent dialogues by simulating a conversation participant.
- Solution: The paper proposes a rule-based system that captures the concept of "topic" using data structures to understand and generate coherent topic transitions and specific output utterances in conversations. | 524 | 0.019084 | null | null | null | null | null | null | null | null |
fc0c0dad8e87f245be655864c1c61336242e3a13 | 17368365 | null | Two Discourse Generators | The task of discourse generation is to produce multisentential text in natural language which (when heard or read) produces effects (informing, motivating, etc.) and impressions (conciseness, correctness, ease of reading, etc.) which are appropriate to a need or goal held by the creator of the text. Because even little children can produce multieententiaJ text, the task of discourse generation appears deceptively easy. It is actually extremely complex, in part because it usually involves many different kinds of knowledge. The skilled writer must know the subiect matter, the beliefs of the reader and his own reasons for writing. He must also know the syntax, semantics, inferential patterns, text structures and words of the | {
"name": [
"Mann, William C."
],
"affiliation": [
null
]
} | null | null | 19th Annual Meeting of the Association for Computational Linguistics | 1981-06-01 | 17 | 7 | null | The task of discourse generation is to produce multisentential text in natural language which (when heard or read) produces effects (informing, motivating, etc.) and impressions (conciseness, correctness, ease of reading, etc.) which are appropriate to a need or goal held by the creator of the text.Because even little children can produce multieententiaJ text, the task of discourse generation appears deceptively easy. It is actually extremely complex, in part because it usually involves many different kinds of knowledge. The skilled writer must know the subiect matter, the beliefs of the reader and his own reasons for writing. He must also know the syntax, semantics, inferential patterns, text structures and words of the language. It would be complex enough if these were all independent bodies of knowledge, independently employed. Unfortunately, they are all interdependent in intricate ways. The use of each must be coordinated with all of the others.For Artificial Intelligence, discourse generation is an unsolved problem.There have been only token efforts to date, and no one has addressed the whole problem. Still, those efforts reveal the nature of the task, what makes it diffic;,It and how the complexities can be controlled.In comparing two AI discourse generators here we can do no more than suggest opportunities and attractive options for future exploration. Hopefully we can convey the benefits of hindsight without too much detailed description of the individual systems. We describe them only in terms of a few of the techniques which they employ, partly because these tschnk:lUes seem more vaJuable than the system designs in which they happen to have been used.The systems which we study here are PROTEUS, by Anthony Davey at Edinburgh [Davey 79] , and KDS by Mann and Moore at ISI [Mann and Moore 801. As we will see, each is severely limited and idiosyncratic in scope and technique. Comparison of their individual skills reveals some technical opportunities.Why do we study these systems rather then others? Both of them represent recent developments, in Davey's case, recently published. Neither of them has the appearance of following a hand-drawn map or some' other humanly-produced sequential presentation. Thus their performance represents capabilities of the programs more than cs4)abilities of the programmer. Also, they are relatively unfamiliar to the AI audience. Perhaps most importantly, they have written some of the best machine-produced discourse of the existing art.Rrst we identify particular techniclues in each system which contribute strongly to the quality of the resulting text. Then we compare the two Systems discussing their common failings and the possibilities for creating a system having the best of both.PROTEUS creates commentary on games of tic.tac-toe (noughts and crosses.) Despite the apparent simplicity of this task, the possibilities of producing text are rich and diverse. (See the example in Appendix .)The commentary is intended both to convey the game (except for insignificant variations of rotation and reflection), and also to convey For example:• Best move VS. Actual move: The move generators are used to compute the "best" move, which is compared to the actual one. 
If the move generator for the best move has higher rank than any generator proposing the actual move, then the actual move is treated as a mistake, putting the best move and the actual move in contrast.
• Threat VS. Block: A threat contrasts with an immediately following block. This contrast is a fixed reflex of the system. It seems acceptable to mark any goal pursuit followed by blocking of the goal as contrastive.

Sentence scope is determined by several heuristic rules including:
1. Express as many contrasts as possible explicitly. (This leads to immediate selection of words such as "but" and "however".)
2. Limit sentences to 3 clauses.
3. Put as many clauses in a sentence as possible.
4. Express only the worst of several mistakes.

The main clause structure is built before entering the grammar. Both the move characterization process and the use of contrasts as the principal basis of sentence scope contribute a great deal to the quality of the resulting text. However, Davey's central concern was not with these two processes but with the third one, sentence generation.

3. Unity. Since the grammar is defined in a single publication with a single authorship, the issues of compatibility of parts are minimized.

It is interesting that Davey does not employ the Systemic Grammar derivation rules at the highest level. Although the grammar is defined in terms of the generation of sentences, Davey enters it at the clause level with a sentence description which conforms to Systemic Grammar but was built by other means. A sentence at this level is composed principally of clauses, but the surface conjunctions have already been chosen. Although Davey makes no claim, this may represent a general result about text generation systems: above some level of abstraction in the text planning process, planning is not conditioned by the content of the grammar. The obvious place to expect planning to become independent of the grammar is at the sentence level. But in both PROTEUS and KDS, operations independent of the grammar extend down to the level of independent clauses within sentences. Top level conjunctions are not within such clauses; so they are determined by planning processes before the grammar is entered.

It would be extremely awkward to implement Davey's sentence scope heuristics in a systemic grammar. The formalism is not well suited for operations such as maximizing the total number of explicit contrastive elements. However, the problem is not just a problem with the formalism; grammars generally do not deal with this sort of operation, and so are poorly equipped to do so.

(A figure listing attractive characteristics of Systemic Grammar is garbled beyond recovery at this point; the legible fragments note that the grammar is explicitly published, so there is no need to re-derive it, and that it is organized as systems of choices, such as the choice among "the", "a" and "some", each reached through, and conditioned by, other choices.)

Although the computer scientist who tries to learn from [Davey 79] will find that it presents difficulties, the underlying system is interesting enough to be worth the trouble. Davey's implementation generally appears to be orthodox, conforming to [Hudson 71]. Davey regularizes some of the rules toward type uniformity, and thus reduces the apparent correspondence to Hudson's formulations.
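The move characterization and contrast heuristics attributed to PROTEUS above can be suggested with a toy sketch. The Python below is purely illustrative and is not how PROTEUS was built; the significance ranking and the move records are invented stand-ins.

```python
# Simplified significance ranking for tic-tac-toe move generators (higher = stronger).
RANK = {"win": 4, "block": 3, "threat": 2, "developing": 1}

def characterize(actual, best):
    """Label the actual move, flagging a mistake when a stronger move was available."""
    facts = [("move", actual["square"], actual["kind"])]
    if RANK[best["kind"]] > RANK[actual["kind"]]:
        # Best move vs. actual move: the two are put in contrast ("but", "however").
        facts.append(("mistake", actual, best))
    return facts

def contrasts(history):
    """A threat followed immediately by a block is also marked as contrastive."""
    pairs = []
    for prev, move in zip(history, history[1:]):
        if prev["kind"] == "threat" and move["kind"] == "block":
            pairs.append((prev, move))
    return pairs

game = [
    {"player": "X", "square": 5, "kind": "developing"},
    {"player": "O", "square": 1, "kind": "threat"},
    {"player": "X", "square": 3, "kind": "block"},
]
print(characterize(game[2], {"player": "X", "square": 7, "kind": "win"}))
print(contrasts(game))
```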
However, the linguistic base does not appear to have been compromised by the implementation. One of the major strengths of the work is that it takes advantage of a comprehensive, explicit and linguistically justified grammar.

Text quality is also enhanced by some simple filtering (of what will be expressed) based on dependencies between known facts. Some facts dominate others in the choice of what to say. If there is only one move on the board having a certain significance, say "threat", then the move is described by its significance alone, e.g. "you threatened me" without location information, since the reader can infer the locations. Similarly, only the most significant defensive and offensive aspects of a move are described even though all are known.

The resulting text is of good quality. Although there are awkwardnesses, the immense advantage conferred by using a sophisticated grammar prevails.

Space precludes a thorough description of KDS, but fuller descriptions are available [Mann and Moore 80], [Mann 79], [Moore 79]. KDS consists of five major modules, as indicated in Figure 2. A Fragmenter is responsible for extracting the relevant knowledge from the notation given to it and dividing that knowledge into small expressible units, which we call fragments or protosentences. A Problem Solver, a goal-pursuit engine in the AI tradition, is responsible for selecting the style of the text and for imposing the gross organization onto the text according to that style. A Knowledge Filter removes protosentences that need not be expressed because they would be redundant to the reader. The largest and most interesting module is the Hill Climber, which has three responsibilities: to compose complex protosentences from simple ones, to judge relative quality among the units resulting from composition, and to repeatedly improve the set of protosentences on the basis of those judgments so that it is of the highest overall quality. Finally, a very simple Surface Sentence Maker creates the sentences of the final text out of protosentences. The data flow of these modules can be thought of as a simple pipeline, each module processing the relevant knowledge in turn.

The principal contributors to the quality of the output text are:
1. The Fragment and Compose Paradigm: The information which will be expressed is first broken down into an unorganized collection of subsentential (approximately clause-level) propositional fragments. Each fragment is created by methods which guarantee that it is expressible by a sentence (usually a very short one; this makes it possible to organize the remainder of the processing so that the text production problem is treated as an improvement problem rather than as a search for feasible solutions, a significant advantage). The fragments are then organized and combined in the remaining processing.
2. Aggregation Rules: Clause-combining patterns of English are represented in a distinct set of rules. The rules specify transactions on the set of propositional fragments and previous aggregation results. In each transaction several fragments are extracted and an aggregate structure (capable of representation as a sentence) is inserted. A representative rule, named "Common Cause," shows how to combine the facts for "Whenever C then X" and "Whenever C then Y" into "Whenever C then X and Y" at a propositional level.
3. Preference Assessment: Every propositional fragment or aggregate is scored using a set of scoring rules. The score represents a measure of sentence quality.
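To make the Fragment and Compose, aggregation, and preference machinery more concrete, here is a deliberately tiny sketch in Python. It is not KDS, which was far richer; the fragment representation, the single aggregation rule, and the scoring function are all assumptions made up for the illustration.

```python
# Fragments are modeled as (condition, actions) pairs; one aggregation rule is shown.
def common_cause(fragments):
    """Combine 'Whenever C then X' and 'Whenever C then Y' into 'Whenever C then X and Y'."""
    by_condition = {}
    for cond, actions in fragments:
        by_condition.setdefault(cond, []).extend(actions)
    return [(cond, acts) for cond, acts in by_condition.items()]

def score(unit):
    """Toy preference rule: fewer, fuller sentences read better (brevity is rewarded)."""
    cond, actions = unit
    return 10 * len(actions) - len(cond.split())

def hill_climb(fragments, rules=(common_cause,), rounds=3):
    """Repeatedly apply aggregation rules, keeping a change only if the total score improves."""
    best = fragments
    best_score = sum(score(u) for u in best)
    for _ in range(rounds):
        for rule in rules:
            candidate = rule(best)
            candidate_score = sum(score(u) for u in candidate)
            if candidate_score > best_score:
                best, best_score = candidate, candidate_score
    return best

fire_facts = [("there is a fire", ["call the fire department"]),
              ("there is a fire", ["shut down the computer"])]
print(hill_climb(fire_facts))
# -> [('there is a fire', ['call the fire department', 'shut down the computer'])]
```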
The knowledge domain of KDS' largest example is a Fire Crisis domain, the knowledge of what happens when there is a fire in a computer room. The task was to cause the reader, a computer operator, to know what to do in all contingencies of fire.The most striking impression in comparing the two systems is that they have very little in common. In particular,1. KDS has sentence scoring and a quslity.based selection of I~ow to say things; PROTEUS has no counterp;u't.2. PROTEUS has a sophisticated grammar for which KOS has only a rudimentary counterpart, 3. PROTEUS has only a dynamic, redundancy-based P, nowledge filtering, whereas the filtering in KOS removes principally St=~tic, foreknown information.4. KDS has clause-combining rules which make little use of conjunctions, whereas PROTEUS has no such rules but makes elaborate use of coniunctions.5. KOS selects for brevity above all, whereas PROTEUS selects for contrast =hove all.6. PROTEUS takes great advantage of fact significance assessment, which KDS does not use.They have little in common technically, yet both produce high quality text relative to predecessors. This raises an obvious question--Could the techniques of the two systems be combined in an even more effective system?There is one prominent exception to this general lack of shared functions and characteristics, Recent text synthesis systems [Davey 79] , [Mann end Moore 80] , [Weiner 80] , [Swartout 77 ], [Swartoutthesis 81], all include a facility for keeping certain facts or ideas from being expressed. There is an implicit or explicit model of the reader's knowledge. Any knowledge which is somehow seen as obvious to the reader is suppressed.All of the implemented facilities of this sort are rudimentary; many consist only of manually-ornduced lists or marks. However, it is clear that they cover a deep intellectual problem. Discourse generation must make differing uses of what the reader knows and what the reader does not know.It is absolutely essential to avoid tedious statement of "the obvious." Proper use of presupposition (which has not yet been attempted computationally) likewise depends on this knowledge, and many of the techniques for maintaining coherence depend on it as well. But identification of what is obvious to a reader is a difficult and mostly unexplored problem. Clearly, inference is deeply involved, but what is "obvious" does not match what is validly inferable. It appears that as computer-generated texts become larger the need for a robust model of the obvious will increase rapidly.This section views the collection of techniques which have been discussed so far from the point of view of a designer of a future text synthesis system. What are the design constraints which affect the possibility of particular combinations of these techniques? What combinations are advantageous? Since each system represents a compatible collection of techniques, it is only necessary to examine compatibility of the techniques of one system within the framework of the other.We begin by examining the hypothetical introduction of the KDS techniques of fragmentation, the explicit reader model, aggregation, preference scoring and hill climbing into PROTEUS. We then examine the hypothetical introduction of PROTEUS' grammar, fact significance assessments and use of the contrast heuristic into KDS. 
Finally we consider use of each system on the other's knowledge domain.Introducing KDS teohniques into PROTEUS Fragment and Compose is clearly usable within PROTEUS, since the information on the sequence of moves, particular move locations and the significance of each move all can be regarded as composed of many incleDendent propositions (fragments of the whole structure.) However, Fragment and Compose appears to give only small benefits, principally because the linear sequences of tic-tac-toe game transcripts give an acceptable organization and do not preclude many interesting texts.Aggregation is also useable, and would appear to allow for a greater diverSity of sentence forms than Oavey's Secluential assembly torocedures allow.In KDS, and presumably in PROTEUS as well, aggregation rules can be used to make text brief, in effect, PROTEUS already has some aggregation, since the way its uses of conjunction shorten the text is similar to effects of aggregation rules in KDS.Prefei'ence judgment and Hill climbing are interQependent in KDS.Introducing both into PROTEUS would appear to give great improvement, especially in avoiding the long awkward referring phrases which PROTEUS i=roduced. The system could detect the excessively long constructs and give them lower scores, leading to choice of shorter sentences in those cases.The Explicit Reader model could also be used directly in PROTEUS; it would not help much however, since relatively little foreknowledge is involved in any tic-tac-toe game commentary/.Introducing PROTEUS techniques into KDS Systemic Grammar could be introduced into KDS to great advantage. The KDS grammar was deliberately chosen to be rudimentary in order to facilitate exploration above the sentence level. (In fact. KDS could not be extended in any interesting way without ulxJrading its grammar.) Even with a Systemic Grammar in KDS, aggregation rules would remain, functioning as sentence design elements.Fact significance assessments are also compatible with the KDS design. As in PROTEUS they would immediately follow aoduialtion of the basic grogositianeL They could improve the text significantly.The contrast heuristic (and other PROTEUS heuristics) would fit well into KDS, not as an a priori sentence design device but as a basis for assigning preference. Higher score for contrast would improve the text.In summary, the principal techniques appear to be completely compatible, and the combination would surely produce better text than either system alone.The tic-tac-toe domain would fit early into KDS` but the KOS text-organization Drocesles (not discuased in this I:~ger) would have littJe to do. The fire crisis domain would be too complex for PROTEUS. It involves several actorS at once, several parallel contingencies and no single clear organizing principle. PROTEUS lacks the necessary text-organization methods.These systems share (with many others) the i=rimitive state of the computer.be,sad discourse-generation a~. Their groce~,~l are [=rimarily devoted to activities that go without notice among literate I~eogle. The deeper linguistic and metorical phenomena usually associated with the term "discourse" are hardly touched. These systems make little attempt at coherence, and they do not respond in amy way to the coherence (or lack of it) which they achieve. Presupposition, topic, focus, theme, the pro~er role of inference, imglicature, direct and indirect Sl:~ech act performance and a host of other relevant concepts all go unrepresented. 
Even wome, the 46 underlying conceotual agpars.tus in both systems is extremely adhoc and idiosyncratic, severely limiting the I=OSSiblities for using general knowledge of the semantics of English.Despite these deficiencies, the systems produce relatively smooth readable texL They are significant principally as collections of methods which may well survive to become l=arts of more competent and general.purOose systems of the future.PROTEUS and KDS are distinct collections of discourse generation techniques. There is little overlap between them, and no a~patent conflict involved in using the techniques in combination. A system which combined the strengths of both would be significantly better than either.APPENDIX: TEXT SAMPLES FROM THE T~NO SYSTEMS KDS: Fire-Alarm Text for Computer Operstors Whenever there is a fire, the alarm system is started, which soul!de a bell and starts a timer. Ninety seconds after • e timer startlk unless the alarm system is cancelled, the system calls Wells Fargo. When Wells Fargo is called, they, in turn, call the Fire Department.When you hear the alarm bell or smell smoke, stop whatever you are doing, determine whether or not there is a • fire, and decide whether to permit the ala~;m system or to cancel it. When you determine whether there is a fire, if there is, permit the alarm system, othewvise cancel iL When you i~¢mit the alarm system, call the Fire Department if possible, then evacuate. When you cancel the alarm system, if it is more then 90 seconds since the timer started, the system will have called Wells Fargo already, cth~ continue what you were doing. The game started with my taking a comer, and you took an adjacent one. I threatened you by taking the middle of the edge.opposite that and adjacent to the one which 1 had just taken but you blocked it and threatened me. I blocked your diagonal and forked you. If you had blocked mine, you would have forked me, but you took the middle of the edge oppoalte the corner which I took first and the one which you had just taken and so I won by completing my diagoned. | null | null | null | null | Main paper:
:
The task of discourse generation is to produce multisentential text in natural language which (when heard or read) produces effects (informing, motivating, etc.) and impressions (conciseness, correctness, ease of reading, etc.) which are appropriate to a need or goal held by the creator of the text. Because even little children can produce multisentential text, the task of discourse generation appears deceptively easy. It is actually extremely complex, in part because it usually involves many different kinds of knowledge. The skilled writer must know the subject matter, the beliefs of the reader and his own reasons for writing. He must also know the syntax, semantics, inferential patterns, text structures and words of the language. It would be complex enough if these were all independent bodies of knowledge, independently employed. Unfortunately, they are all interdependent in intricate ways. The use of each must be coordinated with all of the others.

For Artificial Intelligence, discourse generation is an unsolved problem. There have been only token efforts to date, and no one has addressed the whole problem. Still, those efforts reveal the nature of the task, what makes it difficult and how the complexities can be controlled. In comparing two AI discourse generators here we can do no more than suggest opportunities and attractive options for future exploration. Hopefully we can convey the benefits of hindsight without too much detailed description of the individual systems. We describe them only in terms of a few of the techniques which they employ, partly because these techniques seem more valuable than the system designs in which they happen to have been used.

The systems which we study here are PROTEUS, by Anthony Davey at Edinburgh [Davey 79], and KDS by Mann and Moore at ISI [Mann and Moore 80]. As we will see, each is severely limited and idiosyncratic in scope and technique. Comparison of their individual skills reveals some technical opportunities. Why do we study these systems rather than others? Both of them represent recent developments, in Davey's case recently published. Neither of them has the appearance of following a hand-drawn map or some other humanly produced sequential presentation. Thus their performance represents capabilities of the programs more than capabilities of the programmer. Also, they are relatively unfamiliar to the AI audience. Perhaps most importantly, they have written some of the best machine-produced discourse of the existing art. First we identify particular techniques in each system which contribute strongly to the quality of the resulting text. Then we compare the two systems, discussing their common failings and the possibilities for creating a system having the best of both.

PROTEUS creates commentary on games of tic-tac-toe (noughts and crosses). Despite the apparent simplicity of this task, the possibilities for producing text are rich and diverse. (See the example in the Appendix.) The commentary is intended both to convey the game (except for insignificant variations of rotation and reflection) and also to convey the significance of the moves. For example:

- Best move vs. actual move: The move generators are used to compute the "best" move, which is compared to the actual one. If the move generator for the best move has higher rank than any generator proposing the actual move, then the actual move is treated as a mistake, putting the best move and the actual move in contrast.
- Threat vs. block: A threat contrasts with an immediately following block.
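The move characterization and the contrasts just listed can be pictured schematically. The following is a minimal, hypothetical sketch, not Davey's implementation: the generator names, the proposal sets and the board encoding are all assumptions made only for illustration.

```python
# A minimal, hypothetical sketch of PROTEUS-style move characterization (not Davey's
# actual code). Move generators are ranked by priority; a move's significance is the
# highest-ranked generator proposing it, and an actual move is marked as a mistake
# (and contrasted with the best move) when a higher-ranked generator proposed a
# different move.

GENERATOR_RANKS = ["win", "block_win", "fork", "block_fork", "threat", "default"]

def significance(move, proposals):
    """proposals maps generator name -> set of proposed squares (assumed domain knowledge)."""
    for rank, name in enumerate(GENERATOR_RANKS):
        if move in proposals.get(name, set()):
            return rank, name
    return len(GENERATOR_RANKS), "default"

def characterize(actual, legal_moves, proposals):
    best = min(legal_moves, key=lambda m: significance(m, proposals)[0])
    best_rank, _ = significance(best, proposals)
    act_rank, act_label = significance(actual, proposals)
    facts = [("significance", actual, act_label)]
    if best_rank < act_rank:
        facts.append(("mistake", actual))
        facts.append(("contrast", best, actual))   # feeds the contrast heuristics below
    return facts

# Example: squares are numbered 0-8; the player could have forked but merely threatened.
proposals = {"fork": {4}, "threat": {2, 4}}
print(characterize(2, legal_moves={0, 2, 4, 6}, proposals=proposals))
# -> [('significance', 2, 'threat'), ('mistake', 2), ('contrast', 4, 2)]
```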
This contrast is a fixed reflex of the system. It seems acceptable to mark any goal pursuit followed by blocking of the goal as contrastive.

Sentence scope is determined by several heuristic rules, including:
1. Express as many contrasts as possible explicitly. (This leads to immediate selection of words such as "but" and "however".)
2. Limit sentences to 3 clauses.
3. Put as many clauses in a sentence as possible.
4. Express only the worst of several mistakes.

The main clause structure is built before entering the grammar. Both the move characterization process and the use of contrasts as the principal basis of sentence scope contribute a great deal to the quality of the resulting text. However, Davey's central concern was not with these two processes but with the third one, sentence generation.

3. Unity. Since the grammar is defined in a single publication with a single authorship, the issues of compatibility of parts are minimized.

It is interesting that Davey does not employ the Systemic Grammar derivation rules at the highest level. Although the grammar is defined in terms of the generation of sentences, Davey enters it at the clause level with a sentence description which conforms to Systemic Grammar but was built by other means. A sentence at this level is composed principally of clauses, but the surface conjunctions have already been chosen. Although Davey makes no claim, this may represent a general result about text generation systems: above some level of abstraction in the text planning process, planning is not conditioned by the content of the grammar. The obvious place to expect planning to become independent of the grammar is at the sentence level. But in both PROTEUS and KDS, operations independent of the grammar extend down to the level of independent clauses within sentences. Top-level conjunctions are not within such clauses, so they are determined by planning processes before the grammar is entered.

It would be extremely awkward to implement Davey's sentence scope heuristics in a systemic grammar. The formalism is not well suited for operations such as maximizing the total number of explicit contrastive elements. However, the problem is not just a problem with the formalism; grammars generally do not deal with this sort of operation, and so are poorly equipped to do so.

[Figure: a boxed summary of Systemic Grammar notation; the scanned text is largely unrecoverable. It notes that the grammar is divided into systems of choices and realization statements, and that a system of choices is reached from other choices and is therefore conditional.]

Although the computer scientist who tries to learn from [Davey 79] will find that it presents difficulties, the underlying system is interesting enough to be worth the trouble. Davey's implementation generally appears to be orthodox, conforming to [Hudson 71]. Davey regularizes some of the rules toward type uniformity, and thus reduces the apparent correspondence to Hudson's formulations. However, the linguistic basis does not appear to have been compromised by the implementation. One of the major strengths of the work is that it takes advantage of a comprehensive, explicit and linguistically justified grammar.
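The sentence-scope heuristics listed earlier in this section lend themselves to a schematic rendering. The sketch below is hypothetical and not Davey's code; the fact dictionaries and their keys are assumptions made only for illustration.

```python
# A hypothetical rendering of PROTEUS-style sentence-scope heuristics (not Davey's
# code). Clause-sized facts are greedily packed into sentences of at most three
# clauses; an explicit contrast forces a "but", and only the worst of several
# mistakes is kept for expression.

MAX_CLAUSES = 3

def plan_sentences(facts):
    """facts: list of dicts like {"clause": ..., "contrasts_previous": bool, "mistake_rank": int|None}"""
    # Express only the worst mistake (rank 0 = worst).
    mistakes = [f for f in facts if f.get("mistake_rank") is not None]
    if mistakes:
        worst = min(mistakes, key=lambda f: f["mistake_rank"])
        facts = [f for f in facts if f.get("mistake_rank") is None or f is worst]

    sentences, current = [], []
    for fact in facts:
        if len(current) >= MAX_CLAUSES:
            sentences.append(current)
            current = []
        if fact.get("contrasts_previous") and current:
            fact = dict(fact, conjunction="but")     # make the contrast explicit
        current.append(fact)
    if current:
        sentences.append(current)
    return sentences
```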
Text quality is also enhanced by some simple filtering (of what will be expressed) based on dependencies between known facts. Some facts dominate others in the choice of what to say. If there is only one move on the board having a certain significance, say "threat", then the move is described by its significance alone, e.g. "you threatened me" without location information, since the reader can infer the locations. Similarly, only the most significant defensive and offensive aspects of a move are described even though all are known. The resulting text is of good quality. Although there are awkwardnesses, the immense advantage conferred by using a sophisticated grammar prevails.

Space precludes a thorough description of KDS, but fuller descriptions are available [Mann and Moore 80], [Mann 79], [Moore 79]. KDS consists of five major modules, as indicated in Figure 2. A Fragmenter is responsible for extracting the relevant knowledge from the notation given to it and dividing that knowledge into small expressible units, which we call fragments or protosentences. A Problem Solver, a goal-pursuit engine in the AI tradition, is responsible for selecting the presentational style of the text and for imposing the gross organization onto the text according to that style. A Knowledge Filter removes protosentences that need not be expressed because they would be redundant to the reader. The largest and most interesting module is the Hill Climber, which has three responsibilities: to compose complex protosentences from simple ones, to judge relative quality among the units resulting from composition, and to repeatedly improve the set of protosentences on the basis of those judgments so that it is of the highest overall quality. Finally, a very simple Surface Sentence Maker creates the sentences of the final text out of protosentences. The data flow of these modules can be thought of as a simple pipeline, each module processing the relevant knowledge in turn.

The principal contributors to the quality of the output text are:
1. The Fragment and Compose Paradigm: The information which will be expressed is first broken down into an unorganized collection of subsentential (approximately clause-level) propositional fragments. Each fragment is created by methods which guarantee that it is expressible by a sentence (usually a very short one; this makes it possible to organize the remainder of the processing so that the text production problem is treated as an improvement problem rather than as a search for feasible solutions, a significant advantage). The fragments are then organized and combined in the remaining processing.
2. Aggregation Rules: Clause-combining patterns of English are represented in a distinct set of rules. The rules specify transactions on the set of propositional fragments and previous aggregation results. In each transaction several fragments are extracted and an aggregate structure (capable of representation as a sentence) is inserted. A representative rule, named "Common Cause," shows how to combine the facts for "Whenever C then X" and "Whenever C then Y" into "Whenever C then X and Y" at a propositional level.
3. Preference Assessment: Every propositional fragment or aggregate is scored using a set of scoring rules. The score represents a measure of sentence quality.

The knowledge domain of KDS's largest example is a Fire Crisis domain, the knowledge of what happens when there is a fire in a computer room. The task was to cause the reader, a computer operator, to know what to do in all contingencies of fire.
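The "Common Cause" aggregation rule and the preference scoring just described can be illustrated with a small sketch. It is hypothetical, not the original KDS implementation; the tuple encoding of fragments and the particular scoring rule are assumptions made for illustration only.

```python
# A small sketch of KDS-style aggregation and preference assessment (hypothetical,
# not the original implementation). Fragments are propositional tuples; the
# "Common Cause" rule merges two conditionals sharing a condition, and a scoring
# rule prefers fewer, more aggregated protosentences.

def common_cause(fragments):
    """Combine ("whenever", C, X) and ("whenever", C, Y) into ("whenever", C, ("and", X, Y))."""
    out = list(fragments)
    for i, a in enumerate(out):
        for j, b in enumerate(out):
            if i < j and a[0] == b[0] == "whenever" and a[1] == b[1]:
                merged = ("whenever", a[1], ("and", a[2], b[2]))
                return [f for k, f in enumerate(out) if k not in (i, j)] + [merged]
    return out

def score(protosentences):
    # Crude stand-in for KDS's scoring rules: reward brevity of the whole set.
    return -len(protosentences)

frags = [("whenever", "fire", "alarm_sounds"), ("whenever", "fire", "timer_starts")]
aggregated = common_cause(frags)
print(aggregated, score(aggregated))
# -> [('whenever', 'fire', ('and', 'alarm_sounds', 'timer_starts'))] -1
```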
The most striking impression in comparing the two systems is that they have very little in common. In particular:
1. KDS has sentence scoring and a quality-based selection of how to say things; PROTEUS has no counterpart.
2. PROTEUS has a sophisticated grammar for which KDS has only a rudimentary counterpart.
3. PROTEUS has only a dynamic, redundancy-based knowledge filtering, whereas the filtering in KDS removes principally static, foreknown information.
4. KDS has clause-combining rules which make little use of conjunctions, whereas PROTEUS has no such rules but makes elaborate use of conjunctions.
5. KDS selects for brevity above all, whereas PROTEUS selects for contrast above all.
6. PROTEUS takes great advantage of fact significance assessment, which KDS does not use.

They have little in common technically, yet both produce high quality text relative to predecessors. This raises an obvious question: could the techniques of the two systems be combined in an even more effective system?

There is one prominent exception to this general lack of shared functions and characteristics. Recent text synthesis systems [Davey 79], [Mann and Moore 80], [Weiner 80], [Swartout 77], [Swartout thesis 81] all include a facility for keeping certain facts or ideas from being expressed. There is an implicit or explicit model of the reader's knowledge. Any knowledge which is somehow seen as obvious to the reader is suppressed. All of the implemented facilities of this sort are rudimentary; many consist only of manually produced lists or marks. However, it is clear that they cover a deep intellectual problem. Discourse generation must make differing uses of what the reader knows and what the reader does not know. It is absolutely essential to avoid tedious statement of "the obvious." Proper use of presupposition (which has not yet been attempted computationally) likewise depends on this knowledge, and many of the techniques for maintaining coherence depend on it as well. But identification of what is obvious to a reader is a difficult and mostly unexplored problem. Clearly, inference is deeply involved, but what is "obvious" does not match what is validly inferable. It appears that as computer-generated texts become larger the need for a robust model of the obvious will increase rapidly.

This section views the collection of techniques which have been discussed so far from the point of view of a designer of a future text synthesis system. What are the design constraints which affect the possibility of particular combinations of these techniques? What combinations are advantageous? Since each system represents a compatible collection of techniques, it is only necessary to examine compatibility of the techniques of one system within the framework of the other. We begin by examining the hypothetical introduction of the KDS techniques of fragmentation, the explicit reader model, aggregation, preference scoring and hill climbing into PROTEUS. We then examine the hypothetical introduction of PROTEUS' grammar, fact significance assessments and use of the contrast heuristic into KDS. Finally we consider use of each system on the other's knowledge domain.
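Before turning to those combinations, the shared facility discussed above, suppression of what is obvious to the reader, can be pictured with a toy sketch. It is purely illustrative and belongs to none of the cited systems; the reader-model contents are an assumption.

```python
# A toy illustration (not any of the cited systems' code) of reader-model-based
# suppression: facts that the model marks as already known to the reader are
# dropped before expression.

READER_KNOWS = {("alarm", "sounds_bell")}        # assumed, manually produced "model of the obvious"

def knowledge_filter(protosentences):
    """Drop protosentences whose content the reader is assumed to know already."""
    return [p for p in protosentences if p not in READER_KNOWS]

facts = [("alarm", "sounds_bell"), ("alarm", "starts_timer")]
print(knowledge_filter(facts))                   # -> [('alarm', 'starts_timer')]
```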
Introducing KDS techniques into PROTEUS. Fragment and Compose is clearly usable within PROTEUS, since the information on the sequence of moves, particular move locations and the significance of each move can all be regarded as composed of many independent propositions (fragments of the whole structure). However, Fragment and Compose appears to give only small benefits, principally because the linear sequences of tic-tac-toe game transcripts give an acceptable organization and do not preclude many interesting texts. Aggregation is also usable, and would appear to allow for a greater diversity of sentence forms than Davey's sequential assembly procedures allow. In KDS, and presumably in PROTEUS as well, aggregation rules can be used to make text brief; in effect, PROTEUS already has some aggregation, since the way its uses of conjunction shorten the text is similar to the effects of aggregation rules in KDS. Preference judgment and hill climbing are interdependent in KDS. Introducing both into PROTEUS would appear to give great improvement, especially in avoiding the long awkward referring phrases which PROTEUS produced. The system could detect the excessively long constructs and give them lower scores, leading to choice of shorter sentences in those cases. The Explicit Reader Model could also be used directly in PROTEUS; it would not help much, however, since relatively little foreknowledge is involved in any tic-tac-toe game commentary.

Introducing PROTEUS techniques into KDS. Systemic Grammar could be introduced into KDS to great advantage. The KDS grammar was deliberately chosen to be rudimentary in order to facilitate exploration above the sentence level. (In fact, KDS could not be extended in any interesting way without upgrading its grammar.) Even with a Systemic Grammar in KDS, aggregation rules would remain, functioning as sentence design elements. Fact significance assessments are also compatible with the KDS design. As in PROTEUS they would immediately follow acquisition of the basic propositions. They could improve the text significantly. The contrast heuristic (and other PROTEUS heuristics) would fit well into KDS, not as an a priori sentence design device but as a basis for assigning preference. A higher score for contrast would improve the text. In summary, the principal techniques appear to be completely compatible, and the combination would surely produce better text than either system alone.

The tic-tac-toe domain would fit easily into KDS, but the KDS text-organization processes (not discussed in this paper) would have little to do. The fire crisis domain would be too complex for PROTEUS. It involves several actors at once, several parallel contingencies and no single clear organizing principle. PROTEUS lacks the necessary text-organization methods.
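The preference judgment and hill climbing discussed above can be sketched schematically. This is a hypothetical sketch, not the KDS implementation: the scoring function and the notion of a "candidate move" are assumptions made only to show the shape of the improvement loop.

```python
# Hypothetical sketch of a KDS-style hill-climbing loop over protosentences
# (not the original implementation). candidate_moves are assumed functions that
# return revised protosentence sets (e.g. by applying aggregation rules);
# score_set() is a stand-in for the preference rules, here simply penalizing
# the number and length of protosentences.

def score_set(protosentences):
    return -sum(1 + len(str(p)) / 100.0 for p in protosentences)

def hill_climb(protosentences, candidate_moves):
    current, best = list(protosentences), score_set(protosentences)
    improved = True
    while improved:
        improved = False
        for move in candidate_moves:
            candidate = move(current)
            if score_set(candidate) > best:
                current, best = candidate, score_set(candidate)
                improved = True
    return current

merge_duplicates = lambda ps: list(dict.fromkeys(ps))     # one toy candidate move
print(hill_climb(["A", "A", "B"], [merge_duplicates]))    # -> ['A', 'B']
```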
These systems share (with many others) the primitive state of the computer-based discourse-generation art. Their processes are primarily devoted to activities that go without notice among literate people. The deeper linguistic and rhetorical phenomena usually associated with the term "discourse" are hardly touched. These systems make little attempt at coherence, and they do not respond in any way to the coherence (or lack of it) which they achieve. Presupposition, topic, focus, theme, the proper role of inference, implicature, direct and indirect speech act performance and a host of other relevant concepts all go unrepresented. Even worse, the underlying conceptual apparatus in both systems is extremely ad hoc and idiosyncratic, severely limiting the possibilities for using general knowledge of the semantics of English. Despite these deficiencies, the systems produce relatively smooth readable text. They are significant principally as collections of methods which may well survive to become parts of more competent and general-purpose systems of the future.

PROTEUS and KDS are distinct collections of discourse generation techniques. There is little overlap between them, and no apparent conflict involved in using the techniques in combination. A system which combined the strengths of both would be significantly better than either.

APPENDIX: TEXT SAMPLES FROM THE TWO SYSTEMS

KDS: Fire-Alarm Text for Computer Operators

Whenever there is a fire, the alarm system is started, which sounds a bell and starts a timer. Ninety seconds after the timer starts, unless the alarm system is cancelled, the system calls Wells Fargo. When Wells Fargo is called, they, in turn, call the Fire Department.

When you hear the alarm bell or smell smoke, stop whatever you are doing, determine whether or not there is a fire, and decide whether to permit the alarm system or to cancel it. When you determine whether there is a fire, if there is, permit the alarm system, otherwise cancel it. When you permit the alarm system, call the Fire Department if possible, then evacuate. When you cancel the alarm system, if it is more than 90 seconds since the timer started, the system will have called Wells Fargo already; otherwise continue what you were doing.

PROTEUS: Tic-Tac-Toe Game Commentary

The game started with my taking a corner, and you took an adjacent one. I threatened you by taking the middle of the edge opposite that and adjacent to the one which I had just taken, but you blocked it and threatened me. I blocked your diagonal and forked you. If you had blocked mine, you would have forked me, but you took the middle of the edge opposite the corner which I took first and the one which you had just taken, and so I won by completing my diagonal.
Appendix:
| null | null | null | null | {
"paperhash": [
"moore|a_snapshot_of_kds._a_knowledge_delivery_system",
"brachman|a_structural_paradigm_for_representing_knowledge.",
"swartout|a_digitalis_therapy_advisor_with_explanations",
"swartout|producing_explanations_and_justifications_of_expert_consulting_programs",
"mann|computer_generation_of_multiparagraph_english_text",
"mann|computer_as_author_--_results_and_prospects.",
"halliday|cohesion_in_english",
"huddleston|the_sentence_in_written_english:_a_syntactic_study_based_on_an_analysis_of_scientific_texts",
"rayees|artificial_intelligence_in"
],
"title": [
"A Snapshot of KDS. A Knowledge Delivery System",
"A Structural Paradigm for Representing Knowledge.",
"A Digitalis Therapy Advisor with Explanations",
"PRODUCING EXPLANATIONS AND JUSTIFICATIONS OF EXPERT CONSULTING PROGRAMS",
"Computer Generation of Multiparagraph English Text",
"Computer as Author -- Results and Prospects.",
"Cohesion in English",
"The Sentence in Written English: A Syntactic Study Based on an Analysis of Scientific Texts",
"Artificial Intelligence In"
],
"abstract": [
"SUMMARY KDS Is a computer program which creates multl-par~raph, Natural Language text from a computer representation of knowledge to be delivered. We have addressed a number of Issues not previously encountered In the generation of Natural Language st the multi-sentence level, vlz: ordering among sentences and the scope of each, quality comparisons between alternative 8~regations of sub-sententJal units, the coordination of communication",
"Abstract : This report presents on associative network formalism for representing conceptual knowledge. While many similar formalisms have been developed since the introduction of the semantic network in 1966, they have often suffered from inconsistent interpretation of their links, lack of appropriate structure in their nodes, and general expressive inadequacy. In this paper, we take a detailed look at the history of these semantic nets and begin to understand their inadequacies by examining closely what their representational pieces have been intended to model. Based on this analysis, a new type of network is presented - the Structured Inheritance Network (SI-NET) - designed to circumvent common expressive shortcomings.",
"This paper describes the English explanation facility of the OWL Digitalis Advisor, a program designed to advise physicians regarding digitalis therapy. The program is written in OWL, an English-based computer language being developed at MIT. The system can explain, in English, both the methods it uses and how those methods were applied during a particular session. In addition, the program can explain how it acquires information and tell the user how it deals with that information either in general or during a particular session.",
"Traditional methods for explaining programs provide explanations by converting to English the code of the program or traces of the execution of that code. While such methods can provide adequate explanations of what the program does or did, they typically cannot provide justifications of the code without resorting to canned-text explanations. That is, such systems cannot tell why what the system is doing is a reasonable thing to be doing. The problem is that the knowledge required to provide these justifications is needed only when the program is being written and does not appear in the code itself. In the XPLAIN system, an automatic programming approach is used to capture some of the knowledge necessary to provide these justifications. The XPLAIN system uses an automatic programmer to generate the consulting program by refinement from abstract goals. The automatic programmer uses a domain model, consisting of facts about the application domain, and a set of domain principles which drive the refinement process forward. By keeping around a trace of the execution of the automatic programmer it is possible to provide justifications of the code using techniques similar to the traditional methods outlined above. This paper discusses the system described above and outlines additional advantages this approach has for explanation.",
"This paper reports recent research into methods for creating natural language text. A new processing paradigm called Fragment-and-Compose has been created and an experimental system implemented in it. The knowledge to be expressed in text is first divided into small propositional units, which are then composed into appropriate combinations and converted into text.KDS (Knowledge Delivery System), which embodies this paradigm, has distinct parts devoted to creation of the propositional units, to organization of the text, to prevention of excess redundancy, to creation of combinations of units, to evaluation of these combinations as potential sentences, to selection of the best among competing combinations, and to creation of the final text. The Fragment-and-Compose paradigm and the computational methods of KDS are described.",
"Abstract : For a computer program to be able to compose text is interesting both intellectually and practically. Artificial Intelligence research has only recently begun to address the task of creating coherent texts containing more than one sentence. One recent research has produced a new paradigm for organizing and expressing information in text. This paradigm, called Fragment-and-Compose, has been used in a pilot project to create texts from semantic nets. The method involves dividing the given body of information into many small propositional units, and then combining these units into smooth coherent text. So far the largest example written by Fragment-and-Compose has been two paragraphs of instruction about what a computer operator should do in case of indications of a fire. This report describes the text generation problem and anticipates a specific way to disseminate and use technical developments. It presents the research that led to creation of Fragment-and-Compose, including the largest example of computer-produced text. It also discusses the immediate problems and difficulties of elaborating Fragment-and-Compose into a general and powerful method. (Author)",
"Cohesion in English is concerned with a relatively neglected part of the linguistic system: its resources for text construction, the range of meanings that are speciffically associated with relating what is being spoken or written to its semantic environment. A principal component of these resources is 'cohesion'. This book studies the cohesion that arises from semantic relations between sentences. Reference from one to the other, repetition of word meanings, the conjunctive force of but, so, then and the like are considered. Further, it describes a method for analysing and coding sentences, which is applied to specimen texts.",
"Preface 1. Introduction 2. Mood 3. Transitivity and voice 4. Complementation 5. Relativisation 6. Comparison 7. The Modal Auxiliaries 8. Theme Appendix: sources of the corpus References Index.",
"It goes without saying that coronavirus (COVID-19) is an infectious disease and many countries are coping with its different variants. Owing to the limited medical facilities, vaccine and medical experts, need of the hour is to intelligently tackle its spread by making artificial intelligence (AI) based smart decisions for COVID-19 suspects who develop different symptoms and they are kept under observation and monitored to see the severity of the symptoms. The target of this study is to analyze COVID-19 suspects data and detect whether a suspect is a COVID-19 patient or not, and if yes, then to what extent, so that a suitable decision can be made. The decision can be categorized such that an infected person can be isolated or quarantined at home or at a facilitation center or the person can be sent to the hospital for the treatment. This target is achieved by designing a mathematical model of COVID-19 suspects in the form of a multi-criteria decision making (MCDM) model and a novel AI based technique is devised and implemented with the help of newly developed plithogenic distance and similarity measures in fuzzy environment. All findings are depicted graphically for a clear understanding and to provide an insight of the necessity and effectiveness of the proposed method. The concept and results of the proposed technique make it suitable for implementation in machine learning, deep learning, pattern recognition etc."
],
"authors": [
{
"name": [
"James A. Moore",
"W. Mann"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Brachman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Swartout"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Swartout"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Mann",
"James A. Moore"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Mann",
"James A. Moore"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Halliday",
"R. Hasan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Huddleston"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Muhammad Rayees",
"Usman Ahmad",
"Afzal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"41559124",
"58814991",
"8761024",
"61001771",
"112842",
"60897235",
"62192469",
"60662038",
"1411037"
],
"intents": [
[],
[],
[],
[],
[],
[],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | Problem: The paper addresses the complexity of discourse generation, highlighting the challenges involved in producing multisentential text that effectively conveys information and impressions appropriate to the creator's goal.
Solution: The paper proposes that discourse generation is a highly intricate task due to the interdependence of various knowledge domains such as subject matter, reader beliefs, syntax, semantics, inferential patterns, and text structures. The hypothesis is that discourse generation in Artificial Intelligence remains an unsolved problem, with the need for coordination among different types of knowledge to achieve high-quality text generation. | 524 | 0.013359 | null | null | null | null | null | null | null | null |
eea3b637ba9628f608623733b608919051930214 | 61359265 | null | A Grammar and a Lexicon for a Text-Production System | In a text-production system high and special demands are placed on the grammar and the lexicon. This paper will view these components in such a system (overview in section 1). First, the subcomponents dealing with semantic information and with syntactic information will be presented separately (section 2). The problems of relating these two types of information are then identified (section 3). Finally, strategies designed to meet the problems are proposed and discussed (section 4). One of the issues that will be illustrated is what happens when a systemic linguistic approach is combined with a KL-ONE like knowledge representation - a novel and hitherto unexplored combination. | {
"name": [
"Matthiessen, Christian M.I.M."
],
"affiliation": [
null
]
} | null | null | 19th Annual Meeting of the Association for Computational Linguistics | 1981-06-01 | 16 | 24 | null | null | null | null | This paper will view a grammar and a lexicon as integral parts of a text production system (PENMAN). This perspective leads to certain requirements on the form of the grammar and that of the subparts of the lexicon and on the strategies for integrating these components with each other and with other parts of the system. In the course of the presentation of the components, the subcomponents and the integrating strategies, these requirements will be addressed. Here I will give a brief overview of the system.

PENMAN is a successor to KDS ([12], [14] and [13]) and is being created to produce multi-sentential natural English text. It has as some of its components a knowledge domain, encoded in a KL-ONE like representation, a reader model, a text planner, a lexicon, and a sentence generator (called NIGEL). The grammar used in NIGEL is a Systemic Grammar of English of the type developed by Michael Halliday (see below for references). For present purposes the grammar, the lexicon and their environment can be represented as shown in Figure 1. The lines enclose sets; the boxes are the linguistic components. The dotted lines represent parts that have been developed independently of the present project, but which are being implemented, refined and revised, and the continuous lines represent components whose design is being developed within the project. The box labeled syntax stands for syntactic information, both of the general kind that is needed to generate structures (the grammar; the left part of the box) and of the more specific kind that is needed for the syntactic definition of lexical items (the syntactic subentry of lexical items; to the right in the box). The term lexicogrammar can also be used to denote both ends of the box.

1 This research was supported by the Air Force Office of Scientific Research, contract No. F49620-79-C-0181. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Office of Scientific Research or of the U.S. Government. The research reported is a joint effort, and so are the ideas stemming from it which are the substance of this paper. I would like to thank in particular William Mann, who has helped me think, given me several helpful ideas and suggestions and commented extensively on drafts of this paper; without him it would not be. I am also grateful to Yasutomo Fukumochi for helpful comments on a draft and to Michael Halliday, who has made clear to me many systemic principles and insights.

[Figure 1 (not reproduced in this extraction): the grammar, the lexicon and their environment; the horizontal dimension runs from General to Specific.]

Semantics is related to our general conceptual organization of the world around us and our own inner world; it is the linguistic part of conceptuals. For the lexicon this means that lexical semantics is that part of conceptuals which has become lexicalized and thus enters into the structure of the vocabulary. There is also a correlation between conceptual organization and the organization of part of the grammar. The double arrow between the two boxes represents the mapping (realization or encoding) of semantics into syntax. For example, the concept SELL is mapped onto the verb sold. The grammar is the general part of the syntactic box, the part concerned with syntactic structures.
The lexicon cuts across three levels: it has a semantic part, a syntactic part (lexis) and an orthographic part (or spelling; not present in the figure). The lexicon consists entirely of independent lexical entries, each representing one lexical item (typically a word). This figure, then, represents the part of the PENMAN text production system that includes the grammar, the lexicon and their immediate environment.

2 I am using the general convention of capitalizing terms denoting semantic entries. Capitals will also be used for roles associated with concepts (like AGENT, RECIPIENT and OBJECT) and for grammatical functions (like ACTOR, BENEFICIARY and GOAL). These notions will be introduced below.

3 This means that an entry for a lexical item consists of three subentries: a semantic entry, a syntactic entry and an orthographic entry. The lexicon box is shown as containing parts of both syntax and semantics in the figure (the shaded area) to emphasize the nature of the lexical entry.

PENMAN is at the design stage; consequently the discussion that follows is tentative and exploratory rather than definitive. The component that has advanced the farthest is the grammar. It has been implemented in NIGEL, the sentence generator mentioned above. It has been tested and is currently being revised and extended. None of the other components (those demarcated by continuous lines) have been implemented; they have been tested only by way of hand examples. This paper will concentrate on the design features of the grammar rather than on the results of the implementation and testing of it.

One of the fundamental properties of the KL-ONE like knowledge representation (KR) is its intensional-extensional distinction, the distinction between a general conceptual taxonomy and a second part of the representation where we find individuals which can exist, states of affairs which may be true, etc. This is roughly a distinction between what is conceptualizable and actual conceptualizations (whether they are real or hypothetical). In the overview figure in section 1, the two are together called conceptuals. For instance, to use an example I will be using throughout this paper, there is an intensional concept SELL, about which no existence or location in time is claimed. An intensional concept is related to extensional concepts by the relation Individuates: intensional SELL is related to individual instances of extensional SELLs by the Individuates relation. If I know that Joan sold Arthur ice-cream in the park, I have a SELL fixed in time which is part of an assertion about Joan, and it Individuates intensional SELL.

A concept has internal structure: it is a configuration of roles. The concept SELL has an internal structure which is the three roles associated with it, viz. AGENT (the seller), RECIPIENT (the buyer) and OBJECT. These roles are slots which are filled by other concepts, and the domains over which these can vary are defined as value restrictions. The AGENT of SELL is a PERSON or a FRANCHISE and so on. In other words, a concept is defined by its relation to other concepts (much as in European structuralism). These relations are roles associated with the concept, roles whose fillers are other concepts. This gives rise to a large conceptual net. There is another relation which helps define the place of a concept in the conceptual net, viz. SuperCategory, which gives the conceptual net a taxonomic (or hierarchic) structure in addition to the structure defined by the role relations.
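As an illustration only (the actual representation is KL-ONE like, not the Python below), the conceptual net just described, with roles, value restrictions, SuperCategory links and the Individuates relation, might be sketched as follows; all names and structures here are assumed for the example.

```python
# Illustrative sketch of the intensional/extensional conceptual net described above
# (hypothetical data structures, not the KL-ONE implementation).

class Concept:
    def __init__(self, name, super_category=None, roles=None):
        self.name = name
        self.super_category = super_category          # taxonomic (SuperCategory) link
        self.roles = roles or {}                      # role name -> value restriction / filler
        self.individuates = None                      # set on extensional instances

PERSON      = Concept("PERSON")
TRANSACTION = Concept("TRANSACTION")
SELL = Concept("SELL", super_category=TRANSACTION,
               roles={"AGENT": PERSON, "RECIPIENT": PERSON, "OBJECT": Concept("THING")})

# An extensional SELL, fixed in time, Individuates the intensional concept.
sell_17 = Concept("SELL-17")
sell_17.individuates = SELL
sell_17.roles = {"AGENT": "Joan", "RECIPIENT": "Arthur", "OBJECT": "ice-cream"}
```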
A semantic entry, then, is a concept in the conceptuals. For sold, we find SELL with its associated roles, AGENT, RECIPIENT and OBJECT. The right part of Figure 4-1 below (marked "se:", after a figure from [1]) gives a more detailed semantic entry for sold: a pointer identifies the relevant part in the KR, the concept that constitutes the semantic entry (here the concept SELL). The concept that constitutes the semantic entry of a lexical item has a fairly rich structure. Roles are associated with the concept, and the modality (necessary or optional), the cardinality of, and restrictions on (values of) the fillers are given.

Both the SuperCategory relation and the Individuates relation provide ways of walking around in the KR to find expressions for concepts. If we are in the extensional part of the KR, looking at a particular individual, we can follow the Individuates link up to an intensional concept. There may be a word for it, in which case the concept is part of a lexical entry. If there is no word for the concept, we will have to consider the various options the grammar gives us for forming an appropriate expression. The general assumption is that all the intensional vocabulary can be used for extensional concepts in the way just described: expressibility is inherited with the Individuates relation. Expression candidates for concepts can also be located along the SuperCategory link by going from one concept to another one higher up in the taxonomy. Consider the following example: Joan sold Arthur ice-cream. The transaction took place in the park. The SuperCategory link enables us to go from SELL to TRANSACTION, where we find the expression transaction.

The structure of the vocabulary is parasitic on the conceptual structure. In other words, lexicalized concepts are related not only to one another, but also to concepts for which there is no word-encoding in English (i.e. non-lexicalized concepts). Crudely, the semantic structure of the lexicon can be described as being part of the hierarchy of intensional concepts -- the intensional concepts that happen to be lexicalized in English. The structure of English vocabulary is thus not the only principle that is reflected in the knowledge representation, but it is reflected. Very general concepts like OBJECT, THING and ACTION are at the top. In this hierarchy, roles are inherited. This corresponds to the semantic redundancy rules of a lexicon. Considering the possibility of walking around in the KR and the integration of lexicalized and non-lexicalized concepts, the KR suggests itself as the natural place to state certain text-forming principles, some of which have been described under the terms lexical cohesion ([8]) and Thematic Progression ([6]).
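The walk through the KR to find expression candidates, up an Individuates link and then along SuperCategory links, can be rendered schematically. The sketch is purely illustrative; the dictionaries standing in for the lexicon and the net are assumptions.

```python
# Illustrative only: finding an expression for a concept by walking Individuates
# and SuperCategory links, as described above. The lexicon maps lexicalized
# intensional concepts to words; non-lexicalized concepts fall back to an ancestor.

LEXICON = {"SELL": "sold", "TRANSACTION": "transaction"}
SUPER   = {"SELL": "TRANSACTION", "TRANSACTION": "ACTION"}
INDIVIDUATES = {"SELL-17": "SELL"}        # extensional instance -> intensional concept

def expression_for(concept):
    concept = INDIVIDUATES.get(concept, concept)   # extensional? go up to intensional
    while concept is not None:
        if concept in LEXICON:                      # lexicalized concept found
            return LEXICON[concept]
        concept = SUPER.get(concept)                # otherwise climb the taxonomy
    return None                                     # leave it to the grammar to construct

print(expression_for("SELL-17"))   # -> 'sold'
print(expression_for("BARTER"))    # -> None (no word; the grammar must form an expression)
```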
I will now turn to the syntactic component in Figure 1-1, starting with a brief introduction to the framework (Systemic Linguistics) that does the same for that component as the notion of a semantic net did for the component just discussed. The systemic tradition recognizes a fundamental principle in the organization of language: the distinction between choice and the structures that express (realize) choices. Choice is taken as primary and is given special recognition in the formalization of the systemic model of language. Consequently, a description is a specification of the choices a speaker can make together with statements about how he realizes a selection he has made. This realization of a set of choices is typically linear, e.g. a string of words.

Each choice point is formalized as a system (hence the name Systemic). The options open to the speaker are two or more features that constitute alternatives which can be chosen. The preconditions for the choice are entry conditions to the system. Entry conditions are logical expressions whose elementary terms are features. All but one of the systems have non-empty entry conditions. This causes an interdependency among the systems, with the result that the grammar of English forms one network of systems, which cluster when a feature in one system is (part of) the entry condition to another system. This dependency gives the network depth: it starts (at its "root") with very general choices. Other systems of choice depend on them (i.e. have a feature from one of these systems -- or a combination of features from more than one system -- as entry conditions) so that the systems of choice become less general (more delicate, to use the systemic term) as we move along in the network.

The network of systems is where the control of the grammar resides, its non-deterministic part. Systemic grammar thus contrasts with many other formalisms in that choice is given explicit representation and is captured in a single rule type (systems), not distributed over the grammar as e.g. optional rules of different types. This property of systemic grammar makes it a very useful component in a text-production system, especially in the interface with semantics and in ensuring accessibility of alternatives. The rest of the grammar is deterministic -- the consequences of features chosen in the network of systems. These consequences are formalized as feature realization statements whose task is to build the appropriate structure. For example, in independent indicative sentences, English offers a choice between declarative and interrogative sentences. If interrogative is chosen, this leads to a dependent system with a choice between wh-interrogative and yes/no-interrogative. When the latter is chosen, it is realized by having the FINITE verb before the SUBJECT.
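The mood example just given can be used to sketch, purely for illustration, what a tiny fragment of a system network with entry conditions and realization statements might look like in code. This is not the NIGEL implementation; the data layout, the feature names as strings and the ordering statements are assumptions.

```python
# A toy, illustrative encoding of a fragment of a system network (not the NIGEL
# implementation): each system has an entry condition over already-chosen features,
# a set of alternative features, and realization statements triggered by a feature.

SYSTEMS = [
    {"name": "MOOD", "entry": {"independent", "indicative"},
     "choices": {"declarative", "interrogative"}},
    {"name": "INTERROGATIVE-TYPE", "entry": {"interrogative"},
     "choices": {"wh-interrogative", "yes/no-interrogative"}},
]

REALIZATIONS = {
    "declarative":          [("order", "SUBJECT", "FINITE")],
    "yes/no-interrogative": [("order", "FINITE", "SUBJECT")],
}

def traverse(selected, choose):
    """choose(system) picks one feature; the selection grows until no system can be entered."""
    changed = True
    while changed:
        changed = False
        for system in SYSTEMS:
            if system["entry"] <= selected and not (system["choices"] & selected):
                selected.add(choose(system))
                changed = True
    statements = [r for f in selected for r in REALIZATIONS.get(f, [])]
    return selected, statements

features, statements = traverse({"independent", "indicative"},
                                choose=lambda s: sorted(s["choices"])[0])
print(features, statements)   # declarative is chosen, so SUBJECT is ordered before FINITE
```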
Since it is the general design of the grammar that is the focus of attention, I will not go through the algorithm for generating a sentence as it has been implemented in NIGEL. The general observation is that the results are very encouraging, although it is incomplete. The algorithm generates a wide range of English structures correctly. There have not been any serious problems in implementing a grammar written in the systemic notation. The structure consists of three layers of function symbols, all of which are needed to get the result desired. The structure is not only functional (with function symbols labeling the constituents instead of category names like Noun Phrase and Verb Phrase) but it is multifunctional. Each layer of function symbols shows a particular perspective on the clause structure. Layer [1] gives the aspect of the sentence as a representation of our experience. The second layer structures the sentence as interaction between the speaker and the hearer; the fact that SUBJECT precedes FINITE signals that the speaker is giving the hearer information. Layer [3] represents a structuring of the clause as a message; the THEME is its starting point.

The functions are called experiential, interpersonal and textual respectively in the systemic framework; the function symbols are said to belong to three different metafunctions. In the rest of the paper I will concentrate on the experiential metafunction, partly because it will turn out to be highly relevant to the lexicon.

The syntactic subentry. In the systemic tradition, the syntactic part of the lexicon is seen as a continuation of grammar (hence the term lexicogrammar for both of them): lexical choices are simply more detailed (delicate) than grammatical choices (cf. [9]). The vocabulary of English can be seen as one huge taxonomy, with Roget's Thesaurus as a very rough model. A taxonomic organization of the relevant part of the vocabulary of English is intended for PENMAN, but this organization is part of the conceptual organization mentioned above. There is at present no separate lexical taxonomy.

The syntactic subentry potentially consists of two parts. There is always the class specification -- the lexical features. This is a statement of the grammatical potential of the lexical item, i.e. of how it can be used grammatically. For sold the class specification is the following:

verb, class 10, class 02, benefactive

where "benefactive" says that sold can occur in a sentence with a BENEFICIARY, "class 10" that it encodes a material process (contrasting with mental, verbal and relational processes) and "class 02" that it is a transitive verb. In addition, there is a provision for a configurational part, which is a fragment of a structure the grammar can generate, more specifically the experiential part of the grammar. The structure corresponds to the top layer (# [1]) in the example above. In reference to this example, I can make more explicit what I mean by fragment. The general point is that (to take just one class as an example) the presence and character of functions like ACTOR, BENEFICIARY and GOAL -- direct participants in the event denoted by the verb -- depend on the type of verb, whereas the more circumstantial functions like LOCATION remain unaffected and applicable to all types of verb. Consequently, the information about the possibility of having a LOCATION constituent is not the type of information that has to be stated for specific lexical items. The information given for them concerns only a fragment of the experiential functional structure. This says that sold can occur in a fragment of a structure where it is PROCESS and there can be an ACTOR, a GOAL and a BENEFICIARY. The usefulness of the structure fragment will be demonstrated in section 4.
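A compact, hypothetical rendering of the syntactic subentry for sold just described (class features plus a fragment of experiential structure) is given below. The data format is an assumption made for illustration; it is not the PENMAN lexicon format.

```python
# Hypothetical sketch (not the PENMAN data format) of a syntactic subentry for "sold":
# a set of lexical class features plus a fragment of experiential structure that the
# item can occur in, as described above.

SOLD_SYNTACTIC_SUBENTRY = {
    "features": {"verb", "class-10", "class-02", "benefactive"},   # grammatical potential
    "fragment": {                        # experiential structure fragment (top layer)
        "PROCESS": "sold",
        "participants": ["ACTOR", "GOAL", "BENEFICIARY"],
    },
}

def compatible(subentry, required_functions):
    """Can this item serve as PROCESS in a clause needing the given participant functions?"""
    return set(required_functions) <= set(subentry["fragment"]["participants"])

print(compatible(SOLD_SYNTACTIC_SUBENTRY, ["ACTOR", "BENEFICIARY"]))   # -> True
```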
As it has been presented, the grammar runs wild and free. It is organized around choice, to be sure, but there is nothing to relate the choices to the rest of the system, in particular to what we can take to be semantics. The lexicon incorporates the problem of finding an appropriate strategy to link the components to each other, since it cuts across component boundaries. The semantic and syntactic subparts of a lexical entry have been outlined, but nothing has been said about how they should be matched up with one another. The reason why this match is not perfectly straightforward has to do with the fact that both entries may be structures (configurations) rather than single elements. In addition, there are lexical relations that have not been accounted for yet, especially synonymy and polysemy. The control of the grammar resides in the network of systems. Choice experts can be developed to handle the choices in these systems. The idea is that there is an expert for each system in the network and that this expert knows what it takes to make a meaningful choice, what the factors influencing its choice are. It has at its disposal a table which tells it how to find the relevant pieces of information, which are somewhere in the knowledge domain, the text plan or the reader model. In other words, the part of the grammar that is related to semantics is the part where the notion of choice is: the choice experts know about the semantic consequences of the various choices in the grammar and do the job of relating syntax to semantics. The recognition of different functional components of the grammar relates to the multi-functional character of a structure in systemic grammar I mentioned in relation to the example In the park Joan sold Arthur ice-cream in section 2.2. The organization of the sentence into PROCESS, ACTOR, BENEFICIARY, GOAL, and LOCATIVE is an organization the grammar imposes on our experience, and it is the aspect of the organization of the sentence that relates to the conceptual organization of the knowledge domain: it is in terms of this organization (and not e.g. SUBJECT, OBJECT, THEME and NEW INFORMATION) that the mapping between syntax and semantics can be stated. The functional diversity Halliday has provided for systemic grammar is thus useful in a text-production system; the other functions find uses which space does not permit a discussion of here. Pointers from constituents. In order for the choice experts to be able to work, they must know where to look.
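A choice expert of the kind just described might be sketched as follows (hypothetical names and information sources; the table-driven lookup is reduced to simple dictionary access for illustration, and none of this is the PENMAN implementation):

```python
# Illustrative sketch of "choice experts": one small procedure per system in
# the network, each of which knows where to look (knowledge domain, text plan,
# reader model) to choose among the system's alternatives. Invented names.

class Environment:
    """Bundles the information sources a choice expert may consult."""
    def __init__(self, knowledge_domain, text_plan, reader_model):
        self.knowledge_domain = knowledge_domain
        self.text_plan = text_plan
        self.reader_model = reader_model

def determination_expert(env, pointed_to_concept):
    """Expert for a definiteness choice: is the referent already known
    to the reader ('definite') or not ('indefinite')?"""
    if pointed_to_concept in env.reader_model.get("known_concepts", set()):
        return "definite"
    return "indefinite"

def mood_expert(env):
    """Expert for the MOOD system: consult the text plan for the current
    speech-act goal."""
    goal = env.text_plan.get("speech_act", "inform")
    return "interrogative" if goal == "request-information" else "declarative"

if __name__ == "__main__":
    env = Environment(
        knowledge_domain={},
        text_plan={"speech_act": "inform"},
        reader_model={"known_concepts": {"PARK-37"}},
    )
    print(mood_expert(env))                       # declarative
    print(determination_expert(env, "PARK-37"))   # definite
```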
The queslion now is how to relate the two.In the knowledge representation the internal struc~Jre of a concept is a configuration of roles and these roles lead to new concepts to which the concept is related. A syntactic structure is seen as a configuration of / function symbols; syntactic categories serve these functions --in the generation of a structure the functions lead to an entry of a part of the network. For example, the function ACTOR leads to a part of the network whoSe entry feature is Nominal Group just ~s the role AGENT (of SELL) leads to the concept that is the filler of it. The parallel between the two representations in this area are the following: where the previously discussed semantic and syntactic subentries are repeated and paired off against each other.This full lexical entry makes clear the usefulness of the second part of the syntactic entry .. the fragment of the experiential functional structure in which sold can be the PROCESS.Another piece of the total picture siso falls into place now. The notion of a pointer from an experiential function like BENEFICIARY in the grammatical structure to a point in the conceptual net was introduced above. We can now see how this pointer may be Set for individual lexical items: it is introduced as a simple relation between a grammatical function symbol and s conceptual role in the iexical entry of e.g. SELL.Since there is an Indlviduates link between this intensionai concept and any extensional SELL the extensional concept that is part of the particular proposition that is being encoded grarnmaticaJly, the pointer is inherited and will point to a role in the extensional part of the knowledge domain.At this point, I will refer again to the figure below, whose dght half I have already referred to as a full example of a semantic subentry ("see")."sp:" is the spelling or orthographi c subentry; "gee" is the syntactic s,,bentry.We have two configurations in the lexical ent~'y: in the Semantic subentry the concept plus a number of roles and in the syntactic subentry a number of grammaticsi functions. The match is represented in the.f_i~ure abov e by the arrows. FIgure 4-1: Lexical entry for sold in the first step I introduced the KL-ONE like knowledge representation All three roles of SELL have the modaJity "r~c~___,~_~'. This does not dictate the grammatical pos.~bilities. The grammar in Nigei offers a choice between e.g. They sold many books to their customers and The book sold well, In the second example, the grammar only Dicks out a subset of the roles of SELL for expras~on. In other words, the grammar makes the adoption of different persl~¢tives possible. II I can now return to the ol:~ervation that the functional diversity Hallidey has provldat for systemic grammar is useful for our pu~__o' -' e~-__; The fact that grammatical structure is multi.layered means that those aspects of grammatical structure that are relevant to the mapping between the two lexical entries are identified, made explicit (as ACTOR BENEFICIARY etc.) and kept seperate from pdnciplas of grernmatical structuring that are not directly relevant to this mapl:dng (e.g. SUBJECT, NEW and THEME).In conclusion, a stretegy for accounting for synonymy and polysemy can be mentioned.The way to cagture synonymy is to allow a concept to be the semantic subentry for two distinct orthographic entries. If the items are syntactically identical as well. they will also share a syntactic subentry.Polyeemy works the other way:. 
there may be more than one concept for the same syntactic subentry. | I have discus.s~l a gremmm" and a lexicon for PENMAN in two steps. F~rst I looked at them a~ independent components --the semantic entry, the grammar and the syntactic entry --and then, after identifying the problems of integrating them into a system, I tumed to strategies for re!sting the grammar to the conceptual representation and the syntactic entry to the semantic one within the lexicon. and the systemic notation and indicated how their design features can be Out to good use in PENMAN. For instance, the distinction between intension and exten*on in the knowledge representation makes it I~OS.~ble to let iexical semantic~ be part of the conceptuals. It was also suggested that the relations SuberC.,at~gory and Indivlduates can be to find expre~-~ions for a particular concept.The second steO attempted to connect the grammar to semantics through the notion of the choice expel, making use of a design principle of systemic grammars where the notion of choice is taken as ba~c. I pointed out the correlation between the structure of a concept and the notion of structure in the systemic framework and allowed how the two can be matched in a lexical entry and in the generation of a sentence, a slrstegy that could be adopted because of the multl.funotional nature of structure in systemic grammars. This second step has been at the same time an attempt to start exploring the potential of a combination of a KL-ONE like representation and a Sy~emic Grammar. Although many ~%oects have had to be left out of the discussion, there are s number of issues that are of linguistic interest and significance.The most basic one is perhal~ the task itself:, designing • model where a grammar and a lexicon can actually be mate to function as more than just structure generators. One issue reiatat to this that has been brought uD was that different ~ external to the grammar find resonance in different I=ari~ of the grammar and that there is a partial correlation between tim conceptual structure of the knowleclge reOresentation and the grammar and lexicon.AS was empha.~zacl in the introduction, PENMAN is at the design stage: there is a working sentence generator, but the other 8.qDect~ of what has been di$cut~tecl have not been imDlement~l and there is no commitment yet to a frozen design. Naturally, a large number of problems still await their solution, even at the level of design and, cleerly, many of them will have to wait. For example, selectivity among terms, beyond referential acle¢luacy, is not adclressecl. In general, while noting correlations between linguistic organization and conceptual organization, we do not want the relation tO be deterministic: part of being a good varbaiizar is being able to adopt different viewpoints --verbalize the same knowledge in different ways. This is clearly an ares for future research. Hopefully, ideas such as grammars organized around choice and cl~oice experts will ;)rove useful tools in working out extensions. | Main paper:
the place of a grammar and a lexicon in penman:
This gaper will view a grammar and a lexicon as integral parts of a text production system (PENMAN). This perspective leads to certain recluirements on the form of the grammar and that of the eubparts of the lexicon and on the strategies for integrating these components with each other and with other parts of the system. In the course of the I~resentstion of the componentS, the subcomDonents and the integrating strategies, these requirements will be addressed. Here I willgive a brief overview of the system.PENMAN is a successor tO KDS ([12] , [14] and [13] ) and is being created to produce muiti.sentential natural English text, It has as some of its componentS a knowledge domain, encoded in a KL.ONE like representation, a reader model, a text-planner, a lexicon, end a Sentence generator (called NIGEL). The grammar used in NIGEL is a Systemic Grammar of English of the type develol:~d by Michael Halliday• -see below for references.For present DurOoses the grammar, the lexic,n and their environment can be represented as shown in Figure 1 .The lines enclose setS; the boxes are the linguistic compenents. The dotted lines represent parts that have been develoDed independently of the I~'esent project, but which are being implemented, refined and revised, and the continuous lines represent components whose design ill being developed within the project.The box labeled syntax stands for syntactic information, both of the general kind that iS needed to generate structures (the grammar;, the left part of the box) and of the more Sl~=cific kind that is needed for the syntactic definition of lexical items (the syntactic subentry of lexical items; to the right in the box --the term lexicogrammar can also be uasd to denote both ends of the box).1Thitl reBe•rcti web SUOl~fled by the Air Force Office of Scientific Re~lllrrJ1 contract NO. F49620-7~-¢-01St, The view~ and ¢OIX:IuIIonI contained in this document Me thoe~ of the author and ~ould not be intemretKI u neceB~mly ~tJ~ ~ official goli¢iee or e~clors~mcm=, either e;~ore~ or im~isd. Of the Air FOrCAI Office of .~WIO R~rch ot the U.S. Government. The reeea¢ch re~t~ • joint effort end so ao tt~ =tm~ming from it whicti are the sub, tahoe Of this ml~'. I would like to thank in p~rt~cull=r WIIIklm MInn, who tieb helped i1~ think, given n~e ~ h~l ideaa sugg~o~l and commented extensively on dr.Jft= of th@ PaDre3, without him it ~ not be. I am ~ gretefu| tO Yeeutomo Fukumochi for he~p(ul commcmUI On I dran end to Michael Hldlldey, who h~ mecle clear to m@ rmmy sylRemz¢ i:~n¢iOl~ end In=Ught~ N L ~i i i i ::i ::i i i i ĩi i !i i ĩ::~:::.::ĩi ĩi ĩ:.:::.:::.i :.ĩ General Specific our general conceptual organization of the world around us and our own inner world; it is the linguistic part o! conceptuals. For the lexicon this means that lexical semantics is that part of conceptuals which has become laxicalized and thus enters into the structure of the vocabulary.There is also a correlation between conceptual organization and the organization of part of the grammar.The double arrow between the two boxes represents the mapping (realization or encoding) of semantics into syntax. For example, the concept SELL is mapped onto the verb sold?The grammar is the general Dart of the syntactic box, the part concerned with syntactic structures. The /exicon CUts across three levels: it has a semantic part, a syntactic part (isxis) and an orthographic part (or spelling; not present in the figure)? 
The lexicon 21 •m ul~ng the genec=l convention of cagitllizing terms clattering semantic entree=. C.~tak= will also i~l ueBd fo¢ rom~ aJmocieteo with conce~13 (like AGENT. RECIPIENT lu~ OI~ECT~ and for gcamm~ktical functions (like ACTOR. BENEFICIARY and GOAL). These notions will be introduced below.3This me~m= that an ~ fo¢ a lexical item ¢on~L~ts of three sureties...4¢ i eBmlmtic wltry, • syrltacti¢ entry anti an orttlogrlkOhi¢ ontry. The lexicon box ~ ~howtt •~ containing g4e~l Of ~ syntax and secmlntic=l in the figt~te (ttiQ s~l~ area) to ern~lBize t~ nal~re of the isxicaJ entry, consists entirely of independent lexical entries, each representing one lexicai item (t'ypicaJly a word). This figure, then, represents the i~art of the PENMAN text production system that includes the grammar, the lexicon and their immediate environment.PENMAN is at the design stage; conse¢lUantiy the discussinn that follows is tentative end exploratory rather than definitive. --The ¢om!=onant that has advanced the farthest is the grammar. It has been implemented in NIGEL, the santo nee generator mentioned above. It has been tested and is currently being revised and extended. None of the other components (those demarcated by continuous lines) have been implemented; they have been tested only by way of hand examples. This groat will concentrate on the design features of the grammar rather than on the results of the implementation and testing of it.One of the fundamental properties of the KL-ONE like knowledge representation (KR) is its intensional --extensional distinction, the distinction between a general conceptual taxonomy and a second part of the representation where we find individuals which can exist, states of affairs which may be true etc. This is roughly a disbnction t:~ltween what is conceptuaiizaDle and actual conceptualizations (whether they are real or hypothetical). In the overview figure in section 1, the two are together called conceptuals.For instance, to use an example I will be using throughout this paper, there is an inteflsional concept SELL, about which no existence'D or location in time is claimed. An intenalonal concept is related to extensional concede by the relation Inclividuates: intenaionai SELL is related by individual instances of extensional SELLs by the Individuates relation. If I know that Joan sold Arthur ice-cream in the I~!rk, I have s SELL fixed in time which is part of an assertion about Joan and it Indiviluates intenaional SELL. 4 A concept has internal structure: it is a configuration of roles. The concept SELL has an internal ~re which is the three roles associated with it, viz. AGENT (the seller), RECIPIENT (the buyer) and OBJECT. These rolee are slot3 which are filled by other concepts and the domains over which these can very are defined as value restrictions. The AGENT of SELL is a PERSON or a FRANCHISE and sO on. tn ~,ther words, a ¢oncel~t is defined by its relation to other concepts (much aS in European structuraiism). These relations are roles a'~sociated with the concept, roles whose fillers are other concept¢ This gives rise to a large conceptual net.There is another reiation which helps define the place of a conoe=t in the conceptual net. viz. SuperCategory, which gives the conceptual net a taxonomic (or hierarchic) structure in addition to the structure defined by the role relations. 
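The intensional/extensional distinction and the role structure of a concept such as SELL might be sketched as follows (an illustrative toy representation, not the actual KL-ONE machinery; the value restrictions and fillers are invented examples):

```python
# Illustrative sketch of concepts as configurations of roles, with a
# SuperCategory (taxonomic) link and an Individuates link from extensional
# individuals to their intensional concepts.

class Role:
    def __init__(self, name, value_restriction, modality="necessary"):
        self.name = name
        self.value_restriction = value_restriction  # allowed filler concepts
        self.modality = modality

class Concept:
    def __init__(self, name, supercategory=None, roles=()):
        self.name = name
        self.supercategory = supercategory          # SuperCategory link
        self.roles = {r.name: r for r in roles}

class Individual:
    """An extensional concept, linked to its intensional concept by Individuates."""
    def __init__(self, name, individuates, fillers):
        self.name = name
        self.individuates = individuates
        self.fillers = fillers                      # role name -> individual

# Intensional fragment.
TRANSACTION = Concept("TRANSACTION")
SELL = Concept("SELL", supercategory=TRANSACTION, roles=[
    Role("AGENT", {"PERSON", "FRANCHISE"}),
    Role("RECIPIENT", {"PERSON"}),
    Role("OBJECT", {"THING"}),
])

# Extensional fragment for "Joan sold Arthur ice-cream".
SELL_17 = Individual("SELL-17", SELL,
                     {"AGENT": "JOAN", "RECIPIENT": "ARTHUR", "OBJECT": "ICE-CREAM"})

if __name__ == "__main__":
    print(SELL_17.individuates.name)            # SELL
    print(sorted(SELL_17.individuates.roles))   # ['AGENT', 'OBJECT', 'RECIPIENT']
    print(SELL.supercategory.name)              # TRANSACTION
```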
A semantic entry, than, is a concept in the conceptuais-For sold, we find soil wiffi its associated roles, AGENT, RECIPIENT and OBJECT.The right ~ of figure 4.1 below (marked "se:'; after a figure from [1] gives a more detailed semantic ent~ for sold: = pointer identifies the relevant part in the KR, the concept that constitutes the semantic entry (here the concept SELL).The concept that constitutes the semantic entry of a lexicai item has a fairly rich structure. Roles are associated "with the concept and the modailty (neces~ury or optional), the ¢ardinaii~ of and restrictions on (value of) the fillers are given. Both the Sul~H'Category relation and the Indiviluates relation provide ways of walking around in the KR to find expresmons for concepts. If we are in the extensional part of the KR, looking at a particular individual, w~ can follow the Individuates link up to an intensional concept. There may be a word for it, in which case the concept is part of a laxical entry. If there is no word for the concept, we will have to consider the various options the grammar gives us for forming an ¢oPropriate exoressJon.The general assumption is that all the intensional vocabulary can he used for extensional concepts in the way just describe(l: exc)reasabi..,'y is inherited with the Individuates relation.Expression candidates for concepts can also be located along the SuberCate(Jory link by going from one concept to another one higher up in the taxonomy. Consider the following example: Joan sold Arthur ice.cream. The transaction took place in tl~e perk. The SuperCate~ory link enables us to go from SELL to TRANSACTION, where we find the expression transaction.The structure of the vocabulary is parasitic on the conceptual structure.In other words, laxicalized concepts are related not only to one another, but also to concepts for which there is no word,encoding in English (i.e. non-laxicalized concepts).Crudely, the semantic structure of the lexicon can be described as being part of the hierarchy of intensional concepts --the intensional concepts that happen to be lexicalized in English. --The structure of English vocabulary is thus not the only principle that is reflected in the knowledge representation, but it is reflected. Very general concepts like OBJECT, THING and ACTION are at the top. In this hierarchy, roles are inherited. This corresponds to the semantic redundancy rules of a lexicon.Considering the possibility of walking around in the KR and the integration of texicalized and non.iexicalized concepts, the KR suggests itself as the natural place to state certain text-forming principles, some of which have been described under the terms lexical cohesion ([8] ) and Thematic Progression ( [6] ).I will now turn to the syntactic component in figure 1-1, starting with a brief introduction to the framework (Systemic Linguistics) that does the same for that component as the notion of semantic net did for the component just discussed. The systemic tradition recognizes a fundamental principle in the organization of language: the distinction between cl~oice and the structures that express (realize) choices. Choice is taken as primary and is given special recC,;]nition in the formalization of the systemic model of language. Consequently, a description is a specification of the choices a speaker can make together with statement:; about how he realizes a selection he has made. This realization of a set of choices is typically linear, e.g. a string of words. 
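A sketch of how the Individuates and SuperCategory links can be walked to find expression candidates for a particular concept, assuming a toy net and lexicon (all entries are invented for the example):

```python
# Illustrative sketch of finding an expression for an extensional concept by
# walking Individuates and SuperCategory links until a lexicalized concept is
# reached (e.g. SELL-17 -> SELL -> TRANSACTION -> "transaction").

SUPERCATEGORY = {"SELL": "TRANSACTION", "TRANSACTION": "ACTION"}
INDIVIDUATES = {"SELL-17": "SELL"}
LEXICALIZED = {"SELL": "sold", "TRANSACTION": "transaction", "ACTION": "action"}

def expression_candidates(extensional_concept):
    """Yield words for increasingly general concepts above the given individual."""
    concept = INDIVIDUATES[extensional_concept]
    while concept is not None:
        if concept in LEXICALIZED:
            yield concept, LEXICALIZED[concept]
        concept = SUPERCATEGORY.get(concept)

if __name__ == "__main__":
    for concept, word in expression_candidates("SELL-17"):
        print(concept, "->", word)
    # SELL -> sold
    # TRANSACTION -> transaction
    # ACTION -> action
```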
Each choice point is formalized as a ,system (hence the name Systemic). The options open to the speaker are two or more features that constitute alternatives which can' be chosen. The preconditions for the choice are entry conciitiona to the system. Entry conditions are logical expressions whose elementary terms are features.All but one of the systems have non.emt~/ entry conditions. This causes an interdependency among the systems with the result that the grammar of English forms one network of systems, which cluster when a feature in one system is (part of) the entry condition to another system. This dependency gives the network depth: it starts (at its "root") with very general choices. Other systems of choice depend on them (i.e. have a feature from one of these systems --or st combination of features from more than one system .. as entry conditions) so that the systems of choice become less general (more delicate to use the, systemic term) as we move along in the network.The network of systems is where the control of the grammar resides, its non.deterministic part. Systemic grammar thus contrasts with many other formalisms in that choice is given explicit representation and is captured in a single ruis type (systems), not distributed over the grammar as e.g. optional rules of different types. This property of systemic grammar makes it s very useful component in a text-production system, seDecially in the interf3ce with semantics and in ensuring accessibility of alternatives.The rest of the grammar is deterministic .. the consequences of features chosen in the network of systems. These conse(luences are formalized as feature realization statements whose task is to build the appropriate structure.For example, in independent indicative sentences, English offers a choice between declarative and interroaative sentences, if interrooativ~ is chosen, this leeds to a dependent system with a choice between wh-intsrrooative and ves/no-interroaative. When the latter is chosen, it is realized by having ~.he FINITE verb before the SUBJECT.Since it is the general design of the grammar that is the focus of attention, I will not go through the algorithm for generating a sentence as it has been implemented in NIGEL. The general observation is that the results are very encouraging, although it is incomplete. The algorithm generates a wide range of English structures correctly. There have not been any serious problems in implementing a grammar written in the systemic notation. The structure consists of three layers of function symbols, aJl of which are needed to get the result desired... The structure is not only functional (with-function s/m/ools laloeling the const|tuents instead of category names like Noun Phrase and Verb Phrase) but it is multifunctional.Each layer of function symbols shows a particular perspective on the clause structure. Layer [1] gives the aspect of the sentence as a representation of our experience. The second layer structures the sentence as interaction between the speaker and the hearer;, the fact that SUBJECT precedes FINITE signals that.the speaker is giving the hearer information. Layer [3] represents a structuring of the clause as a message; the THEME is its starting point. 
The functions are called experiential, inte~emonal and textual resm~-~Jvety in the systemic framework: the function symbols are said to belong to three different metafunctions, in the rest of the !~koar I will concentrate on the experiential metafunction, I=artiy because it will turn out to be highly relevant to the lexicon.The syntactic sut3entry.In the systemic tradition, the syntactic part of the lexicon is seen as a continuation of grammar (hence the term lexicogrammar for both of them): lsxical choices are simply more detailed (delicate) than grammatical choices (cf.[9]). The vocabulary of English can be seen as one huge taxonomy, with Roget's Thesaurus as a very rough model.A taxonomic organization of the relevant Dart of the vocabulary of English is intended for PENMAN, but this Organization is part of the conceptual organization mentioned al0ove. There is st present no separate lexicai taxonomy.The syntactic subentry potentially con~sts of two parts. There is alv~ye the class specification .. the lexical features. This is a statement of the grammatical potential of the lexicai item, i.e. of how it can be used grammatically. For sold the'ctas,~ specification is the following:verb C'/I1~ |0 c~als 02 bemlf &ct, 1rewhere "benefactive" says that sold can occur in a sentence with a BENEFICIARY, "class 10" that it encodes a material pr~ (contrasting with mental, varbai and relational processes) and "CMas 02" that it is a tnmaltive verb.In ~ldition, there is a provision for a configurationai part, which is a h'agment of a Structure the grammar can generate, more specifically the experiential part of the grammar, s The structure corresponds to the top layer ( # [1]) in the example above. In reference to this example, I can make more explicit wh~ I mean by fragment. The general point is that (to take just one cimm as an example) the presence and cflara~er of functions like ACTOR, BENEFICIARY and GOAL .-diract t:~'ticiplmts in the event denoted by the verb .-depend on the type of verb, whereas the more circumstantial functions like LOCATION remain unaffected and a~oDlical=ie to all ~ of verb. Conse(luently, the information about the poasibilib/ of having a LOCATION constituent is not the type of information that has to be stated for specific lsxical items. The information given for them concerns only a fragment of the experiential functional structure. This says that sold Can occur in a fragment of a struCtUre where it is PROCESS and there can be an ACTOR, a GOAL and a RENEF1CIARY.The usefulness of the structure fragment will be demonstrated in section 4.
the problem:
I will now turn to the fundamental proiolem of making a working s/stem out of the parts that have been discu~md.The problem ~ two parts to it. viz.1. the design of the system as a system with int.egrated Darts and 2. the implementation of the system. I will only be concerned with the 6rat aspect here.The components of the system have been presented. What remains -. and that is the problem --is to dealgn the misalng [inks; tO find the strategies that will do the job of connecting the components.Finding these strategies is a design problem in the following sense. The stnUegies do not come as accessories with the frameworks we have uasd (the systemic framework and the KL-ONE inspired knowledge reprasentatJon). Moreover, th~me two frameworks stem from two quite dispm'ate traditions with different sets of goals, symbols and terms.I will state the problem for the grammar first and then for the lexicon. As it has been presented, the grammar runs wik:l and free. It is organized Mound choice, to be sure, but there is nothing to relate the choices to the rest of the Wstem, in particular to what we can take to be semantics. The lexicon incorporates the problem of finding an ¢opropriate strategy to link the components to each other, since it cuts acrosa component boundn,des. The semantic and s/ntsctic subpaJts of a lexica| entry have been outlined, but nothing hall been sak:l about how they should be matched up with one ,.,nother. The reason why this match is not ~rfectly straightforward has to do with the fact that both entries may be sa'uctunm (conf,~urations) rather than s~ngle elements. In sedition, there are lexical relations that have not been accounted for yet, es~lcially synonymy and polysemy. The control of the grammar resides in the n.etwork of systems. Choice experts can be developed to handle the choices in these systems.The idea is that there is an expert for each system in the network and that this expert knows what it takes to make a meaningful choice, what the factors influencing its choice are. it has at its disposal a table which tells it how to find the relevant pieces of information, which are somewhere in the knowledge domain, the text plan or the reader model.In other words, the part of the grammar that is related to Semantics is the part where the notion of choice is: the choice experts know about the Semantic consequences of the various choices in the grammar and do the job of relating syntcx tO semantics, sThe recognition of different functional componenta of the grammar relates to the multi-funCtional character of a structure in systemic grimmer I mentioned in relsUon to the example In the park Joan sold Arthur ice.cream in section 2.2. The organization of the sentence into PROCESS, ACTOR, BENEFICIARY, GOAL, and LOCATIVE is an organization the grammar impeses on our experience, and it is the aspect of the organization of the Sentence that relates to the conceptual organization of the knowledge domain: it is in terms of this organization (and not e.g. SUBJECT, OBJECT, THEME and NEW INFORMATION) that the mapping between syntax and semlmtic,,i can be stated... The functional diver~ty Hailiday has provided for systemic grammar is useful in a text.production .slrstam; the other functJone find uses which space does note permit a discuesion of here.Pointers from cJonslituents.In order for the choice experts to be able to work, they must know where to look. 
Resume that we are working on in the park in our example Sentence in the park Joan sold Arthur ice.cream and that an expert has to decide whether park should be definite or not. The information about the status in the mind of the reader of the concept corre~oonding to park in this sentence is located at this conce~t: the ~ck is to ~mociats the concept with the constituent being built. In the example structure given earlier, in the park is both LOCATION and THEME, only the former of which is relevant to the present problem. The solution is to set a pointer to the relevant extensional concept when the function symbol LOCATION is inserted, so that LOCATION will carry the pointer and thus make the information attached to the concept 8ccaesible.
the lexicon and the lexlcal entry:
I have already introduced the semantic subentry and the syntactic subentry. They are stated in a KL-ONE like representation and a systemic notation respectively. The question now is how to relate the two. In the knowledge representation the internal structure of a concept is a configuration of roles, and these roles lead to new concepts to which the concept is related. A syntactic structure is seen as a configuration of function symbols; syntactic categories serve these functions -- in the generation of a structure the functions lead to an entry of a part of the network. For example, the function ACTOR leads to a part of the network whose entry feature is Nominal Group, just as the role AGENT (of SELL) leads to the concept that is the filler of it. The parallel between the two representations is shown in the full lexical entry below, where the previously discussed semantic and syntactic subentries are repeated and paired off against each other. This full lexical entry makes clear the usefulness of the second part of the syntactic entry -- the fragment of the experiential functional structure in which sold can be the PROCESS. Another piece of the total picture also falls into place now. The notion of a pointer from an experiential function like BENEFICIARY in the grammatical structure to a point in the conceptual net was introduced above. We can now see how this pointer may be set for individual lexical items: it is introduced as a simple relation between a grammatical function symbol and a conceptual role in the lexical entry of e.g. SELL. Since there is an Individuates link between this intensional concept and any extensional SELL -- the extensional concept that is part of the particular proposition that is being encoded grammatically -- the pointer is inherited and will point to a role in the extensional part of the knowledge domain. At this point, I will refer again to the figure below, whose right half I have already referred to as a full example of a semantic subentry ("se:"). "sp:" is the spelling or orthographic subentry; "sx:" is the syntactic subentry. We have two configurations in the lexical entry: in the semantic subentry the concept plus a number of roles, and in the syntactic subentry a number of grammatical functions. The match is represented in the figure by the arrows. [Figure 4-1: Lexical entry for sold.] All three roles of SELL have the modality "necessary". This does not dictate the grammatical possibilities. The grammar in NIGEL offers a choice between e.g. They sold many books to their customers and The book sold well. In the second example, the grammar only picks out a subset of the roles of SELL for expression. In other words, the grammar makes the adoption of different perspectives possible. I can now return to the observation that the functional diversity Halliday has provided for systemic grammar is useful for our purposes: the fact that grammatical structure is multi-layered means that those aspects of grammatical structure that are relevant to the mapping between the two lexical entries are identified, made explicit (as ACTOR, BENEFICIARY etc.) and kept separate from principles of grammatical structuring that are not directly relevant to this mapping (e.g. SUBJECT, NEW and THEME). In conclusion, a strategy for accounting for synonymy and polysemy can be mentioned. The way to capture synonymy is to allow a concept to be the semantic subentry for two distinct orthographic entries.
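A full lexical entry of this kind, with the role-to-function match made explicit, might be sketched as follows (the AGENT-ACTOR, OBJECT-GOAL and RECIPIENT-BENEFICIARY pairing is the natural one suggested by the text; the data layout itself is invented):

```python
# Illustrative sketch of a full lexical entry for "sold", pairing the semantic
# subentry (the concept SELL and its roles) with the syntactic subentry (the
# experiential structure fragment) and recording the role <-> function match
# that the arrows in Figure 4-1 express.

LEXICAL_ENTRY_SOLD = {
    "sp": "sold",                                     # spelling subentry
    "se": {                                           # semantic subentry
        "concept": "SELL",
        "roles": {"AGENT": "necessary",
                  "RECIPIENT": "necessary",
                  "OBJECT": "necessary"},
    },
    "sx": {                                           # syntactic subentry
        "features": ["verb", "class 10", "class 02", "benefactive"],
        "fragment": ["PROCESS", "ACTOR", "GOAL", "BENEFICIARY"],
    },
    # The match between conceptual roles and grammatical functions.
    "match": {"AGENT": "ACTOR", "OBJECT": "GOAL", "RECIPIENT": "BENEFICIARY"},
}

def pointer_for(entry, function, extensional_fillers):
    """Follow the role <-> function match down to the extensional filler,
    as the inherited pointer would (e.g. BENEFICIARY -> RECIPIENT -> ARTHUR)."""
    role = {f: r for r, f in entry["match"].items()}[function]
    return extensional_fillers[role]

if __name__ == "__main__":
    fillers = {"AGENT": "JOAN", "RECIPIENT": "ARTHUR", "OBJECT": "ICE-CREAM"}
    print(pointer_for(LEXICAL_ENTRY_SOLD, "BENEFICIARY", fillers))  # ARTHUR
```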
If the items are syntactically identical as well, they will also share a syntactic subentry. Polysemy works the other way: there may be more than one concept for the same syntactic subentry.
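Under this view, synonymy and polysemy reduce to patterns of sharing among subentries, which can be sketched as follows (a toy lexicon with invented items):

```python
# Illustrative sketch: two orthographic entries may point at one concept
# (synonymy), and one orthographic entry may point at several concepts
# (polysemy). The words, concepts and feature strings are invented examples.

LEXICON = [
    {"sp": "sold",   "se": "SELL",       "sx": "verb/class10/benefactive"},
    {"sp": "vended", "se": "SELL",       "sx": "verb/class10/benefactive"},  # synonym
    {"sp": "bank",   "se": "RIVER-BANK", "sx": "noun/common"},
    {"sp": "bank",   "se": "MONEY-BANK", "sx": "noun/common"},               # polysemy
]

def synonyms_of(spelling):
    concepts = {e["se"] for e in LEXICON if e["sp"] == spelling}
    return sorted({e["sp"] for e in LEXICON if e["se"] in concepts} - {spelling})

def senses_of(spelling):
    return sorted({e["se"] for e in LEXICON if e["sp"] == spelling})

if __name__ == "__main__":
    print(synonyms_of("sold"))   # ['vended']
    print(senses_of("bank"))     # ['MONEY-BANK', 'RIVER-BANK']
```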
conclusion:
I have discussed a grammar and a lexicon for PENMAN in two steps. First I looked at them as independent components -- the semantic entry, the grammar and the syntactic entry -- and then, after identifying the problems of integrating them into a system, I turned to strategies for relating the grammar to the conceptual representation and the syntactic entry to the semantic one within the lexicon. In the first step I introduced the KL-ONE like knowledge representation and the systemic notation and indicated how their design features can be put to good use in PENMAN. For instance, the distinction between intension and extension in the knowledge representation makes it possible to let lexical semantics be part of the conceptuals. It was also suggested that the relations SuperCategory and Individuates can be used to find expressions for a particular concept. The second step attempted to connect the grammar to semantics through the notion of the choice expert, making use of a design principle of systemic grammars where the notion of choice is taken as basic. I pointed out the correlation between the structure of a concept and the notion of structure in the systemic framework and showed how the two can be matched in a lexical entry and in the generation of a sentence, a strategy that could be adopted because of the multi-functional nature of structure in systemic grammars. This second step has been at the same time an attempt to start exploring the potential of a combination of a KL-ONE like representation and a Systemic Grammar. Although many aspects have had to be left out of the discussion, there are a number of issues that are of linguistic interest and significance. The most basic one is perhaps the task itself: designing a model where a grammar and a lexicon can actually be made to function as more than just structure generators. One issue related to this that has been brought up was that different factors external to the grammar find resonance in different parts of the grammar and that there is a partial correlation between the conceptual structure of the knowledge representation and the grammar and lexicon. As was emphasized in the introduction, PENMAN is at the design stage: there is a working sentence generator, but the other aspects of what has been discussed have not been implemented and there is no commitment yet to a frozen design. Naturally, a large number of problems still await their solution, even at the level of design and, clearly, many of them will have to wait. For example, selectivity among terms, beyond referential adequacy, is not addressed. In general, while noting correlations between linguistic organization and conceptual organization, we do not want the relation to be deterministic: part of being a good verbalizer is being able to adopt different viewpoints -- verbalize the same knowledge in different ways. This is clearly an area for future research. Hopefully, ideas such as grammars organized around choice and choice experts will prove useful tools in working out extensions.
Appendix:
| null | null | null | null | {
"paperhash": [
"moore|a_snapshot_of_kds._a_knowledge_delivery_system",
"brachman|a_structural_paradigm_for_representing_knowledge.",
"mann|computer_generation_of_multiparagraph_english_text",
"mann|computer_as_author_--_results_and_prospects.",
"halliday|cohesion_in_english"
],
"title": [
"A Snapshot of KDS. A Knowledge Delivery System",
"A Structural Paradigm for Representing Knowledge.",
"Computer Generation of Multiparagraph English Text",
"Computer as Author -- Results and Prospects.",
"Cohesion in English"
],
"abstract": [
"SUMMARY KDS Is a computer program which creates multl-par~raph, Natural Language text from a computer representation of knowledge to be delivered. We have addressed a number of Issues not previously encountered In the generation of Natural Language st the multi-sentence level, vlz: ordering among sentences and the scope of each, quality comparisons between alternative 8~regations of sub-sententJal units, the coordination of communication",
"Abstract : This report presents on associative network formalism for representing conceptual knowledge. While many similar formalisms have been developed since the introduction of the semantic network in 1966, they have often suffered from inconsistent interpretation of their links, lack of appropriate structure in their nodes, and general expressive inadequacy. In this paper, we take a detailed look at the history of these semantic nets and begin to understand their inadequacies by examining closely what their representational pieces have been intended to model. Based on this analysis, a new type of network is presented - the Structured Inheritance Network (SI-NET) - designed to circumvent common expressive shortcomings.",
"This paper reports recent research into methods for creating natural language text. A new processing paradigm called Fragment-and-Compose has been created and an experimental system implemented in it. The knowledge to be expressed in text is first divided into small propositional units, which are then composed into appropriate combinations and converted into text.KDS (Knowledge Delivery System), which embodies this paradigm, has distinct parts devoted to creation of the propositional units, to organization of the text, to prevention of excess redundancy, to creation of combinations of units, to evaluation of these combinations as potential sentences, to selection of the best among competing combinations, and to creation of the final text. The Fragment-and-Compose paradigm and the computational methods of KDS are described.",
"Abstract : For a computer program to be able to compose text is interesting both intellectually and practically. Artificial Intelligence research has only recently begun to address the task of creating coherent texts containing more than one sentence. One recent research has produced a new paradigm for organizing and expressing information in text. This paradigm, called Fragment-and-Compose, has been used in a pilot project to create texts from semantic nets. The method involves dividing the given body of information into many small propositional units, and then combining these units into smooth coherent text. So far the largest example written by Fragment-and-Compose has been two paragraphs of instruction about what a computer operator should do in case of indications of a fire. This report describes the text generation problem and anticipates a specific way to disseminate and use technical developments. It presents the research that led to creation of Fragment-and-Compose, including the largest example of computer-produced text. It also discusses the immediate problems and difficulties of elaborating Fragment-and-Compose into a general and powerful method. (Author)",
"Cohesion in English is concerned with a relatively neglected part of the linguistic system: its resources for text construction, the range of meanings that are speciffically associated with relating what is being spoken or written to its semantic environment. A principal component of these resources is 'cohesion'. This book studies the cohesion that arises from semantic relations between sentences. Reference from one to the other, repetition of word meanings, the conjunctive force of but, so, then and the like are considered. Further, it describes a method for analysing and coding sentences, which is applied to specimen texts."
],
"authors": [
{
"name": [
"James A. Moore",
"W. Mann"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Brachman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Mann",
"James A. Moore"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Mann",
"James A. Moore"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Halliday",
"R. Hasan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"41559124",
"58814991",
"112842",
"60897235",
"62192469"
],
"intents": [
[],
[],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false
]
} | Problem: Integrating the grammar and lexicon components in a text production system like PENMAN poses challenges due to the need to relate syntactic and semantic information effectively.
Solution: By utilizing a systemic grammar approach and a KL-ONE like knowledge representation, strategies can be developed to connect the grammar to semantics and the syntactic entry to the semantic one within the lexicon. This integration will enhance the functionality of the system by ensuring coherence between the grammar and lexicon components. | 524 | 0.045802 | null | null | null | null | null | null | null | null |
8279fe474e25ee954d71f34f310a6dfdc09e5438 | 11001077 | null | A View of Parsing | The questions before this panel presuppose a distinction between parsing and interpretation. There are two other simple and obvious distinctions that I think are necessary for a reasonable discussion of the issues. First, we must clearly distinguish between the static specification of a process and its dynamic execution. Second, we must clearly distinguish two purposes that a natural language processing system might serve: one legitimate goal of a system is to perform some practical ~sk efficiently and well. while a second goal is to assist in developing a scientific understanding of the cognitive operations that underlie human language processing. 1 will refer to pa~rs primarily oriented towards the former goal as Practical Parsers (PP) and refer to the others as Performance Model Parsers (PMP). With these distinctions in mind. let me now turn to the questions at hand. 1. The Computational Perspective. From a computadonal point of view. there are obvious reasons for distinguishing parsing from interpretation. Parsing is the process whereby linearly ordered scquences of character strings annotated with information found in a stored lexicon are transduced into labelled hierarchical structures. Interpretation maps such structures either into structures with different formal properties, such as logical formulas, or into sequences of actions to be performed on a logical model or database. On the face of it, unless we ignore the obvious formal differences between string--to--structure and structure--to--structure mappings, parsing is thus formally and conceptually distinct from interpretation. The specifications of thc two processes necessarily mention different kinds of operations that are sensitive to different-features of the input and express quite different generalizations about the correspondences betwecn form and meaning. | {
"name": [
"Kaplan, Ronald M."
],
"affiliation": [
null
]
} | null | null | 19th Annual Meeting of the Association for Computational Linguistics | 1981-06-01 | 3 | 2 | null | From a computadonal point of view. there are obvious reasons for distinguishing parsing from interpretation. Parsing is the process whereby linearly ordered scquences of character strings annotated with information found in a stored lexicon are transduced into labelled hierarchical structures. Interpretation maps such structures either into structures with different formal properties, such as logical formulas, or into sequences of actions to be performed on a logical model or database. On the face of it, unless we ignore the obvious formal differences between string--to--structure and structure--to--structure mappings, parsing is thus formally and conceptually distinct from interpretation.The specifications of thc two processes necessarily mention different kinds of operations that are sensitive to different-features of the input and express quite different generalizations about the correspondences betwecn form and meaning.As far as I can see. these are simply factual assertions about which there can be little or no debate. Beyond this level, however, there are a number of controversial issues. Even though parsing and interpretation operations are recognizably distinct, they can be combined in a variety of ways to construct a natural language understanding system.For example, the static specification of a s~stem could freely intermix parsing and interpretation operations, so that there is no part of the program text that is clearly identifiable as the parser or interpreter, and perhaps no part that can even be thought of as more pa~er-like or interpreter-like than any other. Although the microscopic operations fall into two classes, there is no notion in such a system of separate parsing and interpretation components at a macroscopic te~cl. .Macroscopiealty. it might be argued` a ,~yslcm specified in this way does not embody a parsmg/interprcmtitm distinctmn.On the other hand. we can imagine a system whose static specification is carefully divided into two parts, one that only specifies parsing operations and expresses parsing generalizations and one that involves only interpretation specifications. And there arc clearly untold numbers of system configurations that fall somewhere between these extremes. I take it to be uncontrovcrsial that. other things being equal, a homogenized system is less preferable on both practical and scientific grounds to one that naturally decomposes. Practically. such a system is easier to build and maintain, since the parts can be designed, developed, and understood to a certain extent in isolation, perhaps even by people working independently. Scientifically. a decomposable system is much more likely to provide insight into the process of natural language eomprehe~ion, whether by machines or people. The reasons for this can be found in Simon's classic essay on the Architecture of Complexity. and in other places as well.The debate arises from the contention that there are important "other things" that cannot be made equal, given a completely decomposed static specification. In particular, it is suggested that parsing and interpretation operations must be partially or totally interleaved during the execuuon of a comprehension process. 
For practical systems, arguments are advanced that a "habitable" system, one that human clients feel comfortable using, must be able to interpret inputs before enough information is available for a complete syntactic structure or when the syntactic information that is available does not lead to a consistent parse. It is also argued that interpretation must be performed in the middle of parsing in the interests of reasonable efficiency: the interpreter can reject sub-constituents that are semantically or pragmatically unacceptable and thereby permit early truncation of long paths of syntactic computation. From the performance model perspective, it is suggested that humans seem able to make syntactic, semantic, and pragmatic decisions in parallel, and the ability to simulate this capability is thus a condition of adequacy for any psycholinguistic model. All these arguments favor a system where the operations of parsing and interpretation are interleaved during dynamic execution, and perhaps even executed on parallel hardware (or wetware, from the PMP perspective). If parsing and interpretation are run-time indistinguishable, it is claimed, then parsing and interpretation must be part and parcel of the same monolithic process. Of course, whether or not there is dynamic fusion of parsing and interpretation is an empirical question which might be answered differently for practical systems than for performance models, and might even be answered differently for different practical implementations. Depending on the relative computational efficiency of parsing versus interpretation operations, dynamic interleaving might increase or decrease overall system effectiveness. For example, in our work on the LUNAR system (Woods, Kaplan, & Nash-Webber, 1972), we found it more efficient to defer semantic processing until after a complete, well-formed parse had been discovered. The consistency checks embedded in the grammar could rule out syntactically unacceptable structures much more quickly than our particular interpretation component was able to do. More recently, Martin, Church, and Ramesh (1981) have claimed that overall efficiency is greatest if all syntactic analyses are computed in breadth-first fashion before any semantic operations are executed. These results might be taken to indicate that the particular semantic components were poorly conceived and implemented, with little bearing on systems where interpretation is done "properly" (or parsing is done improperly). But they do make the point that a practical decision on the dynamic fusion of parsing and interpretation cannot be made a priori, without a detailed study of the many other factors that can influence a system's computational resource demands. Whatever conclusion we arrive at from practical considerations, there is no reason to believe that it will carry over to performance modelling. The human language faculty is an evolutionary compromise between the requirements that language be easy to learn, easy to produce, and easy to comprehend. Because of this, our cognitive mechanisms for comprehension may exhibit acceptable but not optimal efficiency, and we would therefore expect a successful PMP to operate with psychologically appropriate inefficiencies. Thus, for performance modelling, the question can be answered only by finding cases where the various hypotheses make crucially distinct predictions concerning human capabilities, errors, or profiles of cognitive load,
and then testing these predictions in a careful series of psycholinguisttc experiments. It is often debated, usually by non-linguists, whether the recta-linguistic intuitions that form the empirical foundation for much of current linguistic theory are reliable indicators of the naUve speaker's underlying competence. When it comes to questions about internal processing as opposed to structural relations, the psychological literature has demonstrated many times that intuitions are deserving of even much less trust. Thus, though we may have strong beliefs to the effect that parsing and interpretation are psychologically inseparable, our theoretical commitments should rather be based on a solid experimental footing. At this point in time. the experimental evidence is mixed: semantic and syntactic processes are interleaved on-line in many situations, but there is also evidence that these processes have a separate, relatively non-interacting run-time coup.However, no matter how the question of. dynamic fusion is ultimately resolved, it should bc clear t, ha[ dynamic interleaving or parallelism carries no implicauon of" static homogeneity. A system whose run-rune behavior has no distinguishable components may neverthelc~ have a totally dccompo~d static description. Given this possibilty, and given me evident scientific advantages that a dccornposed static spccifgation aflords. I have adopted in my own rescareh on these matters the strong working hypothesis that a statically deeomposahle sys~n co~ be constructed to provide the necessary efficiencics for practical purposes and ycL perhaps with minor modirr.ations and l'twther ~ipulations. Still supp(~n signilicant explanauons of. p~ycholingmstic phenomena.In short, I maintain the position that the "true" comprehension system will also meet our pre-theorctic notions of. scientific elegance and "beauty'. This hypothesis, that truth and beauty are highly correlated in this domain, is perhaps implausible, but it presents a challenge for theory and implementation that has held my interest and fascination for many years. | null | null | The questions before this panel presuppose a distinction between parsing and interpretation. There are two other simple and obvious distinctions that I think are necessary for a reasonable discussion of the issues. First, we must clearly distinguish between the static specification of a process and its dynamic execution. Second, we must clearly distinguish two purposes that a natural language processing system might serve: one legitimate goal of a system is to perform some practical ~sk efficiently and well. while a second goal is to assist in developing a scientific understanding of the cognitive operations that underlie human language processing. 1 will refer to pa~rs primarily oriented towards the former goal as Practical Parsers (PP) and refer to the others as Performance Model Parsers (PMP). With these distinctions in mind. let me now turn to the questions at hand. | While k is certainly Irue that our tools (computers and formal grammars) have shoged our views of" what human languages and human language preceding may be like, it seems a little bit strange to think that our views have been warped by those tools. Warping suggcsts, that there is rome other, more accurate view that we would have comc m either without mathematical or computational tools or with a set of formal tools with a substantially different character. 
There is no way in principle to exclude such a possibility, but it could hc tatar we have the tools wc have because they harmonize with the capabilities of the human mind for scientific understanding. That is. athough substantially different tools might be better suited to the phenomena under investigation, the results cleaved with [hose tools might not be humanly appreciable. "]'he views that have emerged from using our present tools might be far off the mark, but they might be the only views [hat we are c~hle OC Perhaps a more interesting statement can be made if the question is interpreted as posing a conflict between the views that we as computational linguists have come to. guided by our present practical and formal understanding of what constitutes a reasonable computation, and the views that [henretical linguisXs, philosophers, and others similarly unconstrained by concrete computation, might hold. Historically. computational Brammm~ have represented a mixture of intuitions about the significant gntctural generalizations of language and intuitions about what can be p~ efT~:ientiy, given a pani-'ular implementation that the grammar writer had in the back of his or her mind. This is certainly [rue of my own work on some of the catty ATN grammars. Along with many others, I felt an often unconscious pressure to move forward along • given computational path as long as possible before throwing my gramnmtical fate to the purser's general nondeterntioLs~ c~oice mechanisms, even though [his usually meant that feaster contents had to be manipulated in linguistically unjustified ways. For example, the standard ATN account of" passive sentcnces used register operations to •void backtracking that would re.analyze the NP that was initially parsed as an active subject. However. in so doing, the grammar confused the notions of surfare and deep suh)eets, and lost the ability to express gcnendizations concerning, for examplc, passive tag questions.In hindsighL I con~der that my early views were "warped" by both the ATN formalism, with its powerful register operations, and my understanding of the particular top-down, le•right underlying pa~ing algorithm. As [ developed the more sophisticated model of parsing embodied in my General Syntactic Processor, l realized that [here was a systematic, non-fpamrr~*_~*~J way at" holding on to funcXionally mis-assigned constituent structures. Freed from worrying about exponential constituent su'ucture nondetermism, it became possible to restrict and simplify [he ATN's register oparaUons and, ultimately, to give them a non-proceduraL algebraic interpretation. 
The result is a new grammatical formalism, Lexical-Functional Grammar (Kaplan & Bresnan, in press), a formalism that admits a wider class of efficient computational implementations than the ATN formalism just because the grammar itself makes fewer computational commitments. Moreover, it is a formalism that provides for the natural statement of many language-particular and universal generalizations. It also seems to be a formalism that facilitates cooperation between linguists and computational linguists, despite their differing theoretical and methodological biases.

Just as we have been warped by our computational mechanisms, linguists have been warped by their formal tools, particularly the transformational formalism. The convergence represented by Lexical-Functional Grammar is heartening in that it suggests that imperfect tools and understanding can and will evolve into better tools and deeper insights.
the computational perspective:
From a computational point of view, there are obvious reasons for distinguishing parsing from interpretation. Parsing is the process whereby linearly ordered sequences of character strings annotated with information found in a stored lexicon are transduced into labelled hierarchical structures. Interpretation maps such structures either into structures with different formal properties, such as logical formulas, or into sequences of actions to be performed on a logical model or database. On the face of it, unless we ignore the obvious formal differences between string-to-structure and structure-to-structure mappings, parsing is thus formally and conceptually distinct from interpretation. The specifications of the two processes necessarily mention different kinds of operations that are sensitive to different features of the input and express quite different generalizations about the correspondences between form and meaning.

As far as I can see, these are simply factual assertions about which there can be little or no debate. Beyond this level, however, there are a number of controversial issues. Even though parsing and interpretation operations are recognizably distinct, they can be combined in a variety of ways to construct a natural language understanding system. For example, the static specification of a system could freely intermix parsing and interpretation operations, so that there is no part of the program text that is clearly identifiable as the parser or interpreter, and perhaps no part that can even be thought of as more parser-like or interpreter-like than any other. Although the microscopic operations fall into two classes, there is no notion in such a system of separate parsing and interpretation components at a macroscopic level. Macroscopically, it might be argued, a system specified in this way does not embody a parsing/interpretation distinction. On the other hand, we can imagine a system whose static specification is carefully divided into two parts, one that only specifies parsing operations and expresses parsing generalizations and one that involves only interpretation specifications. And there are clearly untold numbers of system configurations that fall somewhere between these extremes.

I take it to be uncontroversial that, other things being equal, a homogenized system is less preferable on both practical and scientific grounds to one that naturally decomposes. Practically, such a system is easier to build and maintain, since the parts can be designed, developed, and understood to a certain extent in isolation, perhaps even by people working independently. Scientifically, a decomposable system is much more likely to provide insight into the process of natural language comprehension, whether by machines or people. The reasons for this can be found in Simon's classic essay on the Architecture of Complexity, and in other places as well.

The debate arises from the contention that there are important "other things" that cannot be made equal, given a completely decomposed static specification. In particular, it is suggested that parsing and interpretation operations must be partially or totally interleaved during the execution of a comprehension process. For practical systems, arguments are advanced that a "habitable" system, one that human clients feel comfortable using, must be able to interpret inputs before enough information is available for a complete syntactic structure or when the syntactic information that is available does not lead to a consistent parse.
It is also argued that interpretation must be performed in the middle of parsing in the interests of reasonable efficiency: the interpreter can reject sub-constituents that are semantically or pragmatically unacceptable and thereby permit early truncation of long paths of syntactic computation. From the performance model perspective, it is suggested that humans seem able to make syntactic, semantic, and pragmatic decisions in parallel, and the ability to simulate this capability is thus a condition of adequacy for any psycholinguistic model. All these arguments favor a system where the operations of parsing and interpretation are interleaved during dynamic execution, and perhaps even executed on parallel hardware (or wetware, from the PMP perspective). If parsing and interpretation are run-time indistinguishable, it is claimed, then parsing and interpretation must be part and parcel of the same monolithic process.

Of course, whether or not there is dynamic fusion of parsing and interpretation is an empirical question which might be answered differently for practical systems than for performance models, and might even be answered differently for different practical implementations. Depending on the relative computational efficiency of parsing versus interpretation operations, dynamic interleaving might increase or decrease overall system effectiveness. For example, in our work on the LUNAR system (Woods, Kaplan, & Nash-Webber, 1972), we found it more efficient to defer semantic processing until after a complete, well-formed parse had been discovered. The consistency checks embedded in the grammar could rule out syntactically unacceptable structures much more quickly than our particular interpretation component was able to do. More recently, Martin, Church, and Ramesh (1981) have claimed that overall efficiency is greatest if all syntactic analyses are computed in breadth-first fashion before any semantic operations are executed. These results might be taken to indicate that the particular semantic components were poorly conceived and implemented, with little bearing on systems where interpretation is done "properly" (or parsing is done improperly). But they do make the point that a practical decision on the dynamic fusion of parsing and interpretation cannot be made a priori, without a detailed study of the many other factors that can influence a system's computational resource demands.

Whatever conclusion we arrive at from practical considerations, there is no reason to believe that it will carry over to performance modelling. The human language faculty is an evolutionary compromise between the requirements that language be easy to learn, easy to produce, and easy to comprehend. Because of this, our cognitive mechanisms for comprehension may exhibit acceptable but not optimal efficiency, and we would therefore expect a successful PMP to operate with psychologically appropriate inefficiencies. Thus, for performance modelling, the question can be answered only by finding cases where the various hypotheses make crucially distinct predictions concerning human capabilities, errors, or profiles of cognitive load, and then testing these predictions in a careful series of psycholinguistic experiments. It is often debated, usually by non-linguists, whether the meta-linguistic intuitions that form the empirical foundation for much of current linguistic theory are reliable indicators of the native speaker's underlying competence.
When it comes to questions about internal processing as opposed to structural relations, the psychological literature has demonstrated many times that intuitions are deserving of even much less trust. Thus, though we may have strong beliefs to the effect that parsing and interpretation are psychologically inseparable, our theoretical commitments should rather be based on a solid experimental footing. At this point in time, the experimental evidence is mixed: semantic and syntactic processes are interleaved on-line in many situations, but there is also evidence that these processes have a separate, relatively non-interacting run-time course.

However, no matter how the question of dynamic fusion is ultimately resolved, it should be clear that dynamic interleaving or parallelism carries no implication of static homogeneity. A system whose run-time behavior has no distinguishable components may nevertheless have a totally decomposed static description. Given this possibility, and given the evident scientific advantages that a decomposed static specification affords, I have adopted in my own research on these matters the strong working hypothesis that a statically decomposable system can be constructed to provide the necessary efficiencies for practical purposes and yet, perhaps with minor modifications and further stipulations, still support significant explanations of psycholinguistic phenomena.

In short, I maintain the position that the "true" comprehension system will also meet our pre-theoretic notions of scientific elegance and "beauty". This hypothesis, that truth and beauty are highly correlated in this domain, is perhaps implausible, but it presents a challenge for theory and implementation that has held my interest and fascination for many years.
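To make the formal distinction drawn at the start of this section concrete, the following is a minimal illustrative sketch of our own (it is not from Kaplan's text, and the toy structures and stub mappings are purely hypothetical): parsing as a string-to-structure mapping, interpretation as a structure-to-structure mapping into a different formal domain.

    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class Tree:
        """A labelled hierarchical structure, the output domain of parsing."""
        label: str
        children: List[Union["Tree", str]]

    def parse(tokens: List[str]) -> Tree:
        # String-to-structure mapping (illustrative stub, not a real parser).
        return Tree("S", [Tree("NP", [tokens[0]]), Tree("VP", tokens[1:])])

    def interpret(tree: Tree) -> str:
        # Structure-to-structure mapping into a logical-form string (stub).
        np, vp = tree.children
        return f"{' '.join(vp.children)}({np.children[0]})"

    print(interpret(parse(["John", "sleeps"])))   # -> sleeps(John)

The point of the sketch is only that the two mappings have different types and can be specified separately, whatever their run-time interleaving.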
the linguistic perspective:
While it is certainly true that our tools (computers and formal grammars) have shaped our views of what human languages and human language processing may be like, it seems a little bit strange to think that our views have been warped by those tools. Warping suggests that there is some other, more accurate view that we would have come to either without mathematical or computational tools or with a set of formal tools with a substantially different character. There is no way in principle to exclude such a possibility, but it could be that we have the tools we have because they harmonize with the capabilities of the human mind for scientific understanding. That is, although substantially different tools might be better suited to the phenomena under investigation, the results achieved with those tools might not be humanly appreciable. The views that have emerged from using our present tools might be far off the mark, but they might be the only views that we are capable of.

Perhaps a more interesting statement can be made if the question is interpreted as posing a conflict between the views that we as computational linguists have come to, guided by our present practical and formal understanding of what constitutes a reasonable computation, and the views that theoretical linguists, philosophers, and others similarly unconstrained by concrete computation, might hold. Historically, computational grammars have represented a mixture of intuitions about the significant structural generalizations of language and intuitions about what can be parsed efficiently, given a particular implementation that the grammar writer had in the back of his or her mind. This is certainly true of my own work on some of the early ATN grammars. Along with many others, I felt an often unconscious pressure to move forward along a given computational path as long as possible before throwing my grammatical fate to the parser's general nondeterministic choice mechanisms, even though this usually meant that register contents had to be manipulated in linguistically unjustified ways. For example, the standard ATN account of passive sentences used register operations to avoid backtracking that would re-analyze the NP that was initially parsed as an active subject. However, in so doing, the grammar confused the notions of surface and deep subjects, and lost the ability to express generalizations concerning, for example, passive tag questions.

In hindsight, I consider that my early views were "warped" by both the ATN formalism, with its powerful register operations, and my understanding of the particular top-down, left-to-right underlying parsing algorithm. As I developed the more sophisticated model of parsing embodied in my General Syntactic Processor, I realized that there was a systematic, non-grammatical way of holding on to functionally mis-assigned constituent structures. Freed from worrying about exponential constituent structure nondeterminism, it became possible to restrict and simplify the ATN's register operations and, ultimately, to give them a non-procedural, algebraic interpretation.
The result is a new grammatical formalism, Lexical-Functional Grammar (Kaplan & Bresnan, in press), a formalism that admits a wider class of efficient computational implementations than the ATN formalism just because the grammar itself makes fewer computational commitments. Moreover, it is a formalism that provides for the natural statement of many language-particular and universal generalizations. It also seems to be a formalism that facilitates cooperation between linguists and computational linguists, despite their differing theoretical and methodological biases.

Just as we have been warped by our computational mechanisms, linguists have been warped by their formal tools, particularly the transformational formalism. The convergence represented by Lexical-Functional Grammar is heartening in that it suggests that imperfect tools and understanding can and will evolve into better tools and deeper insights.
the interactions:
As indicated above, I think computational grammars have been influenced by the algorithms that we expect to apply them with. While difficult to weed out, that influence is not a theoretical or practical necessity. By reducing and eliminating the computational commitments of our grammatical formalism, as we have done with Lexical-Functional Grammar, it is possible to devise a variety of different parsing schemes. By comparing and contrasting their behavior with different grammars and sentences, we can begin to develop a deeper understanding of the way computational resources depend on properties of grammars, strings, and algorithms. This understanding is essential both to practical implementations and also to psycholinguistic modelling. Furthermore, if a formalism allows grammars to be written as an abstract characterization of string-structure correspondences, the grammar should be indifferent as to recognition or generation. We should be able to implement feasible generators as well as parsers, and again, shed light on the interdependencies of grammars and grammatical processing.

Let me conclude with a few comments about the psychological validity of grammars and parsing algorithms. To the extent that a grammar correctly models a native speaker's linguistic competence, or, less tendentiously, the set of meta-linguistic judgments he is able to make, then that grammar has a certain psychological "validity". It becomes much more interesting, however, if it can also be embedded in a psychologically accurate model of speaking and comprehending. Not all competence grammars will meet this additional requirement, but I have the optimistic belief that such a grammar will eventually be found. It is also possible to find psychological validation for a parsing algorithm in the absence of a particular grammar. One could in principle adduce evidence to the effect that the architecture of the parser, the structuring of its memory and operations, corresponds point by point to well-established cognitive mechanisms. As a research strategy for arriving at a psychologically valid model of comprehension, it is much more reasonable to develop linguistically justified grammars and computationally motivated parsing algorithms in a collaborative effort. A model with such independently motivated yet mutually compatible knowledge and process components is much more likely to result in an explanatory account of the mechanisms underlying human linguistic abilities.
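The claim that a declaratively stated grammar should be indifferent between recognition and generation can be illustrated with a toy sketch of our own (not from the paper); the grammar below is an invented, non-recursive example so that exhaustive generation terminates.

    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["kim"], ["sandy"]],
        "VP": [["sleeps"], ["sees", "NP"]],
    }

    def generate(symbol="S"):
        # Enumerate every string the grammar licenses for this symbol.
        if symbol not in GRAMMAR:
            return [[symbol]]                       # terminal word
        results = []
        for rhs in GRAMMAR[symbol]:
            strings = [[]]
            for part in rhs:
                strings = [s + t for s in strings for t in generate(part)]
            results.extend(strings)
        return results

    def recognize(tokens, symbol="S"):
        # Naive recognition by comparison against the generated language;
        # the same declarative rules drive both directions.
        return tokens in generate(symbol)

    print(len(generate()))                          # 6 sentences in the toy language
    print(recognize(["kim", "sees", "sandy"]))      # True

The grammar itself makes no procedural commitments; the two procedures simply read the same string-structure correspondence in opposite directions.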
:
The questions before this panel presuppose a distinction between parsing and interpretation. There are two other simple and obvious distinctions that I think are necessary for a reasonable discussion of the issues. First, we must clearly distinguish between the static specification of a process and its dynamic execution. Second, we must clearly distinguish two purposes that a natural language processing system might serve: one legitimate goal of a system is to perform some practical task efficiently and well, while a second goal is to assist in developing a scientific understanding of the cognitive operations that underlie human language processing. I will refer to parsers primarily oriented towards the former goal as Practical Parsers (PP) and refer to the others as Performance Model Parsers (PMP). With these distinctions in mind, let me now turn to the questions at hand.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 524 | 0.003817 | null | null | null | null | null | null | null | null |
93556328d4a4cbc07fd3f0088a622483c4fe8206 | 8836089 | null | Some Issues in Parsing and Natural Language Understanding | Language is a system for encoding and transmitting ideas. A theory that seeks to explain linguistic phenomena in terms of this fact is a functional theory. One that does not misses the point. | {
"name": [
"Bobrow, Robert J. and",
"Webber, Bonnie L."
],
"affiliation": [
null,
null
]
} | null | null | 19th Annual Meeting of the Association for Computational Linguistics | 1981-06-01 | 13 | 4 | null | Our response to the questions posed to this panel is influenced by a number of beliefs (or biasesl) which we have developed in the course of building and analyzin~ the operation of several natural language understanding (NLU) systems. [I, 2, 3, 12] While the emphasis of the panel i~ on parslnK, we feel that the recovery of the syntactic structure of a natural lan~unKe utterance must be viewed as part of a larger process of reeoverlnK the meaning, intentions and goals underlying its generation.Hence it is inappropriate to consider designing or evaluatln~ natural language parsers or Erem,~ra without taking into account the architecture of the whole ~LU system of which they're a part. I This is the premise from which Our beliefs arise, beliefs which concern two thinks: o the distribution of various types of knowledge, in particular syntactic knowledge, amonK the modules of an NLU system o the information and control Flow emonK those modules.As to the first belief, in the HLU systems we have worked on, most syntactic information is localized in a "syntactic module", although that module does not produce a rallied data structure representing the syntactlo description of an utterance. Thus, if "parslnK" is taken as requlrln~ the production of such a rallied structure, then we do not believe in its necessity. However we do believe in the existence of a module which provides syntactic information to those other parts of the system whose decisions ride on it.As to the second belief, we feel that syntax, semantics and prattles effectively constitute parallel but interacting processors, and that information such as local syntactic relations is determined by Joint decisions -monk them.Our experience shows that with mlnir"al loss of efficiency, one can design these processors to interface cleanly with one another, so as to allow independent design, implementatlon and modification.We spell out these beliefs in slightly more detail below, and at greater length in [ Once the pattern of communication between processors is settled, it is easier to attach a new semnntlcs to the hooks already provided in the Kr~,mar than to build a new semantic processor.In addition, because each module ban only to consider a portion of the constraints implicit in the data (e.g. syntactic constraints, semantic constraints and discourse context), each module can be designed to optimize its own processing and provide an efficient system. The panel has also been charged wlth _ ~oslderlng paa'allel processing as a challenge to its views on parsing.Thls touches on our beliefs about the Interaction among the modules that comprise the HLU system.To respond to this issue, we first want to dlstlngulsh between two types of parallelism: one, in which many instances of the same thin6 are done at once ~ (an in an array of parallel adders-) and another, in which the many thinks done slmul~aneously can be different.Supporting this latter type of parallelism doesn*t change our view of parsing, but rather underlies it.We believe that the Interconnected processes involved in NLU must support a banjo o~eratinK pri~iple that Herman and Bobrow [14] have called "The Principle of Continually Available Output":, (CAO). 
This states that the interacting processes must begin to provide output over a wide range of resource allocations, even before their analyses are complete, and even before all input data is available. We take this position for two reasons: one, it facilitates computational efficiency, and two, it seems to be closer to human parsing processes (a point which we will get to in answering the next question). The added potential for interaction of such processors can increase the capability and efficiency of the overall NLU process. Thus, for example, if the syntactic module makes its intermediate decisions available to semantics and/or pragmatics, then those processors can evaluate those decisions, guide syntax's future behavior and, in addition, develop in parallel their own analyses. Having sent on its latest assertion/advice/question, whether syntax then decides to continue on with something else or wait for a response will depend on the particular kind of message sent. Thus, the parsers and grammars that concern us are ones able to work with other appropriately designed components to support CAO. While the equipment we are using to implement and test our ideas is serial, we take very seriously the notion of parallelism.

Finally under the heading of "Computational Perspective", we are asked about what might motivate our trying to make parsing procedures simulate what we suspect human parsing processes to be like. One motivation for us is the belief that natural language is so tuned to the part extraordinary, part banal cognitive capabilities of human beings that only by simulating human parsing processes can we cover all and only the language phenomena that we are called upon to process. A particular (extraordinary) aspect of human cognitive (and hence, parsing) behavior that we want to explore and eventually simulate is people's ability to respond even under degraded data or resource limitations. There are examples of listeners initiating reasonable responses to an utterance even before the utterance is complete, and in some cases even before a complete syntactic unit has been heard. Simultaneous translation is one notable example [8], and another is provided by the performance of subjects in a verbally guided assembly task reported by P. Cohen [6]. Such an ability to produce output before all input data is available (or before enough processing resources have been made available to produce the best possible response) is what led Norman and Bobrow to formulate their CAO Principle. Our interest is in architectures for NLU systems which support CAO and in search strategies through such architectures for an optimal interpretation.

We have been asked to comment on legitimate inferences about human linguistic competence and performance that we can draw from our experiences with mechanical parsing of formal grammars. Our response is that whatever parsing is for natural languages, it is still only part of a larger process. Just because we know what parsing is in formal language systems, we do not necessarily know what role it plays in the context of total communication. Simply put, formal notions of parsing underconstrain the goals of the syntactic component of an NLU system.
Efficiency measures, based on the resources required for generation of one or all complete parses for a sentence, without semantic or pragmatic interaction, do not necessarily specify desirable properties of a natural language syntactic analysis component. As for whether the efficiency of parsing algorithms for CF or regular grammars suggests that the core of NL grammars is CF or regular, we want to distinguish that part of perception (and hence, syntactic analysis) which groups the stimulus into recognizable units from that part which fills in gaps in information (inferentially) on the basis of such groups. Results in CF grammar theory say that grouping is not best done purely bottom-up, that there are advantages to using predictive mechanisms as well [9, 7]. This suggests two things for parsing natural language:
1. There is a level of evidence and a process for using it that is working to suggest groups.
2. There is another filtering, inferencing mechanism that makes predictions and diagnoses on the basis of those groups.
It is possible that the grouping mechanism may make use of strategies applicable to CF parsing, such as well-formed substring tables or charts, without requiring that the overall language specification be CF. In our current RUS/PSI-KLONE system, grouping is a function of the syntactic module: its output consists of suggested groupings. These suggestions may be abstract, specific or disjunctive. For example, an abstract description might be "this is the head of an NP, everything to its left is a pre-modifier". Here there is no comment about exactly how these pre-modifiers group. A disjunctive description would consist of an explicit enumeration of all the possibilities at some point (e.g., "this is either a time prepositional phrase (PP) or an agentive PP or a locative PP, etc."). Disjunctive descriptions allow us to prune possibilities via case analysis. In short, we believe in using as much evidence from formal systems as seems understandable and reasonable, to constrain what the system should be doing.

Finally, we have been asked about the nature of the relationship between a grammar and a procedure for applying it. On the systems building side, our feeling is that while one should be able to take a grammar and convert it to a recognition or generation procedure [10], it is likely that such procedures will embody a whole set of principles that are control structure related, and not part of the grammar. For example, a grammar need not specify in what order to look for things or in what order decisions should be made. Thus, one may not be able to reconstruct the grammar uniquely from a procedure for applying it. On the other hand, on the human-parsing side, we definitely feel that natural language is strongly tuned to both people's means of production and their means of recognition, and that principles like McDonald's Indelibility Principle [13] or Marcus' Determinism Hypothesis [11] shape what are (and are not) seen as sentences of the language.
preamble:
Our response to the questions posed to this panel is influenced by a number of beliefs (or biases!) which we have developed in the course of building and analyzing the operation of several natural language understanding (NLU) systems [1, 2, 3, 12]. While the emphasis of the panel is on parsing, we feel that the recovery of the syntactic structure of a natural language utterance must be viewed as part of a larger process of recovering the meaning, intentions and goals underlying its generation. Hence it is inappropriate to consider designing or evaluating natural language parsers or grammars without taking into account the architecture of the whole NLU system of which they're a part. This is the premise from which our beliefs arise, beliefs which concern two things:
o the distribution of various types of knowledge, in particular syntactic knowledge, among the modules of an NLU system
o the information and control flow among those modules.

As to the first belief, in the NLU systems we have worked on, most syntactic information is localized in a "syntactic module", although that module does not produce a unified data structure representing the syntactic description of an utterance. Thus, if "parsing" is taken as requiring the production of such a unified structure, then we do not believe in its necessity. However we do believe in the existence of a module which provides syntactic information to those other parts of the system whose decisions ride on it. As to the second belief, we feel that syntax, semantics and pragmatics effectively constitute parallel but interacting processors, and that information such as local syntactic relations is determined by joint decisions among them. Our experience shows that with minimal loss of efficiency, one can design these processors to interface cleanly with one another, so as to allow independent design, implementation and modification. We spell out these beliefs in slightly more detail below, and at greater length in [ ].

Once the pattern of communication between processors is settled, it is easier to attach a new semantics to the hooks already provided in the grammar than to build a new semantic processor. In addition, because each module has only to consider a portion of the constraints implicit in the data (e.g. syntactic constraints, semantic constraints and discourse context), each module can be designed to optimize its own processing and provide an efficient system.

The panel has also been charged with considering parallel processing as a challenge to its views on parsing. This touches on our beliefs about the interaction among the modules that comprise the NLU system. To respond to this issue, we first want to distinguish between two types of parallelism: one, in which many instances of the same thing are done at once (as in an array of parallel adders) and another, in which the many things done simultaneously can be different. Supporting this latter type of parallelism doesn't change our view of parsing, but rather underlies it. We believe that the interconnected processes involved in NLU must support a basic operating principle that Norman and Bobrow [14] have called "The Principle of Continually Available Output" (CAO).
This states that the interacting processes must begin to provide output over a wide range of resource allocations, even before their analyses are complete, and even before all input data is available. We take this position for two reasons: one, it facilitates computational efficiency, and two, it seems to be closer to human parsing processes (a point which we will get to in answering the next question). The added potential for interaction of such processors can increase the capability and efficiency of the overall NLU process. Thus, for example, if the syntactic module makes its intermediate decisions available to semantics and/or pragmatics, then those processors can evaluate those decisions, guide syntax's future behavior and, in addition, develop in parallel their own analyses. Having sent on its latest assertion/advice/question, whether syntax then decides to continue on with something else or wait for a response will depend on the particular kind of message sent. Thus, the parsers and grammars that concern us are ones able to work with other appropriately designed components to support CAO. While the equipment we are using to implement and test our ideas is serial, we take very seriously the notion of parallelism.

Finally under the heading of "Computational Perspective", we are asked about what might motivate our trying to make parsing procedures simulate what we suspect human parsing processes to be like. One motivation for us is the belief that natural language is so tuned to the part extraordinary, part banal cognitive capabilities of human beings that only by simulating human parsing processes can we cover all and only the language phenomena that we are called upon to process. A particular (extraordinary) aspect of human cognitive (and hence, parsing) behavior that we want to explore and eventually simulate is people's ability to respond even under degraded data or resource limitations. There are examples of listeners initiating reasonable responses to an utterance even before the utterance is complete, and in some cases even before a complete syntactic unit has been heard. Simultaneous translation is one notable example [8], and another is provided by the performance of subjects in a verbally guided assembly task reported by P. Cohen [6]. Such an ability to produce output before all input data is available (or before enough processing resources have been made available to produce the best possible response) is what led Norman and Bobrow to formulate their CAO Principle. Our interest is in architectures for NLU systems which support CAO and in search strategies through such architectures for an optimal interpretation.

We have been asked to comment on legitimate inferences about human linguistic competence and performance that we can draw from our experiences with mechanical parsing of formal grammars. Our response is that whatever parsing is for natural languages, it is still only part of a larger process. Just because we know what parsing is in formal language systems, we do not necessarily know what role it plays in the context of total communication. Simply put, formal notions of parsing underconstrain the goals of the syntactic component of an NLU system.
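A rough sketch of our own (not from the paper) of what the Continually Available Output principle asks of a processor: partial results are handed to the other modules after every word, rather than only after a complete analysis. The grouping logic below is a trivial, invented stand-in.

    from typing import Iterator, List

    def incremental_analysis(words: List[str]) -> Iterator[dict]:
        state = {"words_seen": [], "groups": []}
        for word in words:
            state["words_seen"].append(word)
            state["groups"].append({"head": word})       # placeholder grouping decision
            yield {"words_seen": list(state["words_seen"]),
                   "groups": list(state["groups"])}       # output available right now

    for partial in incremental_analysis("show me new messages".split()):
        # semantics or pragmatics could inspect, veto, or act on each partial result here
        latest = partial

    print(latest["words_seen"])   # ['show', 'me', 'new', 'messages']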
Efficiency measures, based on the resources required for generation of one or all complete parses for a sentence, without semantic or pragmatic interaction, do not necessarily specify desirable properties of a natural language syntactic analysis component. As for whether the efficiency of parsing algorithms for CF or regular grammars suggests that the core of NL grammars is CF or regular, we want to distinguish that part of perception (and hence, syntactic analysis) which groups the stimulus into recognizable units from that part which fills in gaps in information (inferentially) on the basis of such groups. Results in CF grammar theory say that grouping is not best done purely bottom-up, that there are advantages to using predictive mechanisms as well [9, 7]. This suggests two things for parsing natural language:
1. There is a level of evidence and a process for using it that is working to suggest groups.
2. There is another filtering, inferencing mechanism that makes predictions and diagnoses on the basis of those groups.
It is possible that the grouping mechanism may make use of strategies applicable to CF parsing, such as well-formed substring tables or charts, without requiring that the overall language specification be CF. In our current RUS/PSI-KLONE system, grouping is a function of the syntactic module: its output consists of suggested groupings. These suggestions may be abstract, specific or disjunctive. For example, an abstract description might be "this is the head of an NP, everything to its left is a pre-modifier". Here there is no comment about exactly how these pre-modifiers group. A disjunctive description would consist of an explicit enumeration of all the possibilities at some point (e.g., "this is either a time prepositional phrase (PP) or an agentive PP or a locative PP, etc."). Disjunctive descriptions allow us to prune possibilities via case analysis. In short, we believe in using as much evidence from formal systems as seems understandable and reasonable, to constrain what the system should be doing.

Finally, we have been asked about the nature of the relationship between a grammar and a procedure for applying it. On the systems building side, our feeling is that while one should be able to take a grammar and convert it to a recognition or generation procedure [10], it is likely that such procedures will embody a whole set of principles that are control structure related, and not part of the grammar. For example, a grammar need not specify in what order to look for things or in what order decisions should be made. Thus, one may not be able to reconstruct the grammar uniquely from a procedure for applying it. On the other hand, on the human-parsing side, we definitely feel that natural language is strongly tuned to both people's means of production and their means of recognition, and that principles like McDonald's Indelibility Principle [13] or Marcus' Determinism Hypothesis [11] shape what are (and are not) seen as sentences of the language.
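A hypothetical rendering of ours of the kind of disjunctive grouping output just described: the syntactic module asserts a group and enumerates the roles it might fill, and another module prunes the disjunction by case analysis. The role names and the pruning test are invented for illustration only.

    group = {
        "constituent": "PP",
        "text": "after January 5",
        "possible_roles": ["time", "agentive", "locative"],   # explicit disjunction
    }

    def semantically_plausible(role: str, text: str) -> bool:
        # Stand-in for a real semantic check: a date phrase makes a poor agent or place.
        return role == "time" if "January" in text else True

    pruned = [r for r in group["possible_roles"]
              if semantically_plausible(r, group["text"])]
    print(pruned)   # ['time']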
Appendix:
| null | null | null | null | {
"paperhash": [
"bobrow|knowledge_representation_for_syntactic/semantic_processing",
"graham|an_improved_context-free_recognizer",
"marcus|a_theory_of_syntactic_recognition_for_natural_language",
"earley|an_efficient_context-free_parsing_algorithm"
],
"title": [
"Knowledge Representation for Syntactic/Semantic Processing",
"An Improved Context-Free Recognizer",
"A theory of syntactic recognition for natural language",
"An efficient context-free parsing algorithm"
],
"abstract": [
"This paper describes the RUS framework for natural language processing, in which a parser incorporating a substantial ATN grammar for English interacts with a semantic interpreter to simultaneously parse and interpret input. The structure of that interaction is discussed, including the roles played by syntactic and semantic knowledge. Several implementations of the RUS framework are currently in use, sharing the same grammar, but differing in the form of their semantic component. One of these, the PSI-KLONE system, is based on a general object-centered knowledge representation system, called KL-ONE. The operation of PSI-KLONE is described, including its use of KL-ONE to support a general inference process called \"incremental description refinement.\" The last section of the paper discusses several important criteria for knowledge representation systems to be used in syntactic and semantic processing.",
"A new algorithm for recognizing and parsing arbitrary context-free languages is presented, and several new results are given on the computational complexity of these problems. The new algorithm is of both practical and theoretical interest. It is conceptually simple and allows a variety of efficient implementations, which are worked out in detail. Two versions are given which run in faster than cubic time. Surprisingly close connections between the Cocke-Kasami-Younger and Earley algorithms are established which reveal that the two algorithms are “almost” identical.",
"Abstract : Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this 'determinism' hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter. (Author)",
"A parsing algorithm which seems to be the most efficient general context-free algorithm known is described. It is similar to both Knuth's LR(k) algorithm and the familiar top-down algorithm. It has a time bound proportional to n3 (where n is the length of the string being parsed) in general; it has an n2 bound for unambiguous grammars; and it runs in linear time on a large class of grammars, which seems to include most practical context-free programming language grammars. In an empirical comparison it appears to be superior to the top-down and bottom-up algorithms studied by Griffiths and Petrick."
],
"authors": [
{
"name": [
"R. Bobrow",
"B. Webber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Graham",
"M. Harrison",
"W. L. Ruzzo"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Mitchell P. Marcus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Earley"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null
],
"s2_corpus_id": [
"3003106",
"1468978",
"6616065",
"35664"
],
"intents": [
[],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false
]
} | null | 524 | 0.007634 | null | null | null | null | null | null | null | null |
2e141dcaa084c257397a88e807dc2a0ea121550d | 8710339 | null | A Construction-Specific approach to Focused Interaction in Flexible Parsing | A flexible parser can deal with input that deviates from its grammar, in addition to input that conforms to it. Ideally, such a parser will correct the deviant input: sometimes, it will be unable to correct it at all; at other times, correction will be possible, but only to within a range of ambiguous possibilities. This paper is concerned with such ambiguous situations, and with making it as easy as possible for the ambiguity to be resolved through consultation with the user of the parser - we presume interactive use. We show the importance of asking the user for clarification in as focused a way as possible. Focused interaction of this kind is facilitated by a construction-specific approach to flexible parsing, with specialized parsing techniques for each type of construction, and specialized ambiguity representations for each type of ambiguity that a particular construction can give rise to. A construction-specific approach also aids in task-specific language development by allowing a language definition that is natural in terms of the task domain to be interpreted directly without compilation into a uniform grammar formalism, thus greatly speeding the testing of changes to the language definition. | {
"name": [
"Hayes, Philip J."
],
"affiliation": [
null
]
} | null | null | 19th Annual Meeting of the Association for Computational Linguistics | 1981-06-01 | 6 | 16 | null | There has been considerable interest recently in the topic of flexible parsing, i.e. the parsing of input that deviates to a greater or lesser extent from the grammar expected by the parsing system. This iriterest springs from very practical concerns with the increamng use of natural language in computer interfaces. When people attempt to use such interfaces, they cannot be expected always to conform strictly to the interfece's grammar, no matter how loose and accomodating that grammar may be. Whenever people spontaneously use a language, whether natural or artificial, it is inevitable that they will make errors of performance. Accordingly, we [3] and other researchers including Weischedel and Black [6] , and Kwasny and Sondheimer [5] , have constructed flexible parsers which accept ungrammatical input, correcting the errors whenever possible, generating several alternative interpretations if more than one correction is plausible, and in cases where the input cannot be massaged into lull grammaticality, producing as complete a partial parse as possible.If a flexible parser being used as part of an interactive system cannot correct ungrammatical input with total, certainty, then the system user must be involved in the resolution of the difficulty or the confirmation of the parser's Correction. The approach taken by Weischedel and Black [6] in such situations is to inform the user about the nature of the difficulty, in the expectation that he will be able to use this information to produce a more acceptable input next time, but this can involve the user in substantial retyping. A related technique, adopted by the COOP system [4] , is to paraphrase back tO the user the one or more parses that the system has produced from the user!s input, and to allow the user to confirm the parse or select one of the ambiguous alternatives, This approach still means a certain amount of work for the user. He must check the paraphrase to see if the system has interpreted what he said correctly and without omission, and in the case of ambiguity, he must compare the several paraphrases to see which most ClOsely corresponds 1This i'e~earch ~ =k~oneoreO by the Air Force Office Of Scientific ReseMch url~" Contract F49620-79.C-0143, The views anO conclusions contained in this document thOSe Of the author and sttould not be interpreted a.s representing [he olficial policies, eJther exl~'e~e¢l or =mDlieO. ol the Air Force Ollice of Scicmlifi¢ Researcll or the US Government to what he meant, a non-trivial task if the input is lengthy and the differences small.Experience with our own flexible parser suggests that the way requests for clarification in such situations are phrased makes a big difference to the ease and accuracy with which the user can correct his errors, and that the user is most helped by a request which focuses as tightly as possible on the exact source and nature of the difficulty. 
Accordingly, we have adopted the following simple principle for the new flexible parser we are presently constructing: when the parser cannot uniquely resolve a problem in its input, it should ask the user for a correction in as direct and focused a manner as possible. Furthermore, this request for clarification should not prejudice the processing of the rest of the input, either before or after the problem occurs. In other words, if the system cannot parse one segment of the input, it should be able to bypass it, parse the remainder, and then ask the user to restate that and only that segment of the input. Or again, if a small part of the input is missing or garbled and there are a limited number of possibilities for what ought to be there, the parser should be able to indicate the list of possibilities together with the context from which the information is missing rather than making the user compare several complete paraphrases of the input that differ only slightly.

In what follows, we examine some of the implications of these ideas. We restrict our attention to cases in which a flexible parser can correct an input error or ungrammaticality, but only to within a constrained set of alternatives. We consider how to produce a focused ambiguity resolution request for the user to distinguish between such a set of corrections. We conclude that:
• the problem must be tackled on a construction-specific basis,
• and special representations must be devised for all the structural ambiguities that each construction type can give rise to.
We illustrate these arguments with examples involving case constructions. There are additional independent reasons for adopting a construction-specific approach to flexible parsing, including increased efficiency and accuracy in correcting ungrammaticality, increased efficiency in parsing grammatical input, and ease of task-specific language definition. The first two of these are discussed in [2], and this paper gives details of the third.

In this section we report on experience with our earlier flexible parser, FlexP [3], and show why it is ill-suited to the generation of focused requests to its user for the resolution of input ambiguities. We propose solutions to the problems with FlexP. We have already incorporated these improvements into an initial version of a new flexible parser [2]. The following input is typical for an electronic mail system interface [1] with which FlexP was extensively used: "the messages from Fred Smith that arrived after Jon 5". The fact that this is not a complete sentence in FlexP's grammar causes no problem. The only real difficulty comes from "Jon", which should presumably be either "Jun" or "Jan". FlexP's spelling corrector can come to the same conclusion, so the output contains two complete parses, which are passed on to the next stage of the mail system interface. This schematized property-list style of representation should be interpreted in the obvious way; FlexP operates by bottom-up pattern matching of a semantic grammar of rewrite rules, which allows it to parse directly into this form of representation, which is the form required by the next phase of the interface. If the next stage has access to other contextual information which allows it to conclude that one or other of these parses was what was intended, then it can proceed to fulfill the user's request.
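The paper's figure showing the two parses did not survive extraction; the following is our own schematic reconstruction (the slot and value names are guesses), illustrating how the single uncertain month forces the whole message description to be duplicated.

    parse_if_january = {
        "MessageDescription": {
            "Sender": "Fred Smith",
            "After": {"Month": "january", "Day": 5},
        }
    }
    parse_if_june = {
        "MessageDescription": {
            "Sender": "Fred Smith",
            "After": {"Month": "june", "Day": 5},
        }
    }
    # Everything except the Month value is repeated across the two parses.
    ambiguous_output = [parse_if_january, parse_if_june]
    print(len(ambiguous_output))   # 2 complete, largely identical parses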
Otherwise it has little choice but to ask a question involving complete paraphrases of each of the ambiguous interpretations. Because it is not focused on the source of the error, such a question gives the user very little help in seeing where the problem with his input actually lies. Furthermore, the system's representation of the ambiguity as several complete parses gives it very little help in understanding a response of "June" from the user, a very natural and likely one in the circumstances. In essence, the parser has thrown away the information on the specific source of the ambiguity that it once had, and would need that information again to deal adequately with such a response from the user.

A better alternative is to localize the ambiguity: the representation is exactly like the one above except that the Month slot is filled by an AmbiguitySet record. This record allows the ambiguity between January and June to be confined to the Month slot where it belongs rather than expanding to an ambiguity of the entire input as in the first approach we discussed. By expressing the ambiguity set as a disjunction, it would be straightforward to generate from this representation a much more focused request for clarification such as: "January or June 5?" A reply of "June" would also be much easier to deal with.

However, this approach only works if the ambiguity corresponds to an entire slot filler. Suppose, for example, that instead of mistyping the month, the user omitted or so completely garbled the preposition "from" that the parser effectively saw: "the messages Fred Smith that arrived after Jan 5". In the grammar used by FlexP for this particular application, the connexion between Fred Smith and the message could have been expressed (to within synonyms) only by "from", "to", or "copied to". FlexP can deal with this input, and correct it to within this three-way ambiguity. To represent the ambiguity, it generates three complete parses isomorphic to the first output example above, except that Sender is replaced by Recipient and CC in the second and third parses respectively. Again, this form of representation does not allow the system to ask a focused question about the source of the ambiguity or interpret naturally elliptical replies to a request to distinguish between the three alternatives. The previous solution is not applicable because the ambiguity lies in the structure of the parser output rather than at one of its terminal nodes. Using a case notation, it is not permissible to put an AmbiguitySet in place of one of the deep case markers.

Instead, the parser output can be given an AmbiguousSlots slot: this example parser output is similar to the two given previously, but instead of having a Sender slot, it has an AmbiguousSlots slot. The filler of this slot is a list of records, each of which specifies a SlotFiller and a list of PossibleSlots. The SlotFiller is a structure that would normally be the filler of a slot in the top-level description (of a message in this case), but the parser has been unable to determine exactly which higher-level slot it should fit into: the possibilities are given in PossibleSlots. With this representation, it is now straightforward to construct a directed question such as: "Do you mean the messages from, to, or copied to Fred Smith that arrived after January 5?" Such questions can be generated by outputting AmbiguousSlots records as the disjunction (in boldface) of the normal case markers for each of the PossibleSlots followed by the normal translation of the SlotFiller.
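Again as our own reconstruction (the surviving text does not show FlexP's concrete notation), the two localized ambiguity representations and the focused question they support might look roughly like this:

    # Slot-level ambiguity: only the Month slot carries the AmbiguitySet.
    month_ambiguity = {
        "MessageDescription": {
            "Sender": "Fred Smith",
            "After": {"Month": {"AmbiguitySet": ["january", "june"]}, "Day": 5},
        }
    }

    # Structural ambiguity: the filler is known, its higher-level slot is not.
    structural_ambiguity = {
        "MessageDescription": {
            "After": {"Month": "january", "Day": 5},
            "AmbiguousSlots": [
                {"SlotFiller": "Fred Smith",
                 "PossibleSlots": ["Sender", "Recipient", "CC"]}
            ],
        }
    }

    CASE_MARKERS = {"Sender": "from", "Recipient": "to", "CC": "copied to"}

    def focused_question(output: dict) -> str:
        # Render an AmbiguousSlots record as a disjunction of case markers
        # followed by the slot filler, as the paper describes.
        record = output["MessageDescription"]["AmbiguousSlots"][0]
        markers = [CASE_MARKERS[s] for s in record["PossibleSlots"]]
        disjunction = ", ".join(markers[:-1]) + ", or " + markers[-1]
        return (f"Do you mean the messages {disjunction} "
                f"{record['SlotFiller']} that arrived after January 5?")

    print(focused_question(structural_ambiguity))
    # -> Do you mean the messages from, to, or copied to Fred Smith that arrived after January 5?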
The main point here, however, does not concern the question generation mechanism, nor the exact details of the formalism for representing ambiguity. It is, rather, that a radical revision of the initial formalism was necessary in order to represent structural ambiguities without duplication of non-ambiguous material. The adoption of such representations for ambiguity has profound implications for the parsing strategies employed by any parser which tries to produce them. For each type of construction that such a parser can encounter, and here we mean construction types at the level of case construction, conjoined list, linear fixed-order pattern, the parser must "know" about all the structural ambiguities that the construction can give rise to, and must be prepared to detect and encode appropriately such ambiguities when they arise. We have chosen to achieve this by designing a number of different parsing strategies, one for each type of construction that will be encountered, and making the parser switch between these strategies dynamically. Each such construction-specific parsing strategy encodes detailed information about the types of structural ambiguity possible with that construction and incorporates the specific information necessary to detect and represent these ambiguities.

Footnote 2: Nor is this problem merely an artifact of case notation; it would arise in exactly the same way for a standard syntactic parse of a sentence such as the well-known "I saw the Grand Canyon flying to New York." The difficulty arises because the ambiguity is structural; structural ambiguities can occur no matter what form of structure is chosen.
Essentially, one has to know the specific trick of creating intermediate, and from the language point of view, superfluous categories like MeesageCase in the example above. Since, we designed FlexP as a tool for use in natural language interfaces, we considered it unreasonable to expect the designer of such a system to have the specialized knowledge to create such obscure rules. Accordingly, we designed a language definition formalism that enabled a grammar to be specified in terms much more natural to the system being interfaced to. The above construction for the description of a message, for instance, could be defined as a single unified construction without specifying any artificial intermediate constituents, as follows: In addition to the syntax of a message description, this piece of formalism also describes the internal structure of a message, and is intended for use with a larger interface system [1] of which FlexP is a part. The larger system provides an interface to a functional subsystem or tool, and is tool-independent in the sense that it is driven by a declarative data base in which the objects and operations of the tool currently being interfaced to are defined in the formalism shown. The example is, in fact, an abbreviated version of the definition of a message from the declarative tool description for an electronic mail system tool with which, the interface was actually used.In the example, the Syntax slot defines the input syntax for a message; it is used to generate rules for RexP, which ere in turn used to parse input descriptions of messages from a user. FlexP's grammar to parse input for the mail system tool is the onion of all the rules compiled in this way from the Syntax fields of ell the objects and operations in the tool description. The SyntaX field of the example says that the syntax for a message is that of a noun phrase, i.e. any of the given head nouns (angle brackets indicate Oatterns of words), followed by any of the given postnominal Cases, preceded by any adjectives -none are given here, which can in turn be preceded by a determiner. The up.arrows in the Case patterns refer beck to slots of a message, as specified in the Scheme slOt of the example -the information in the Schema sl0t is aJso used by other parts of the interface. The actual grammar rules needed by FlexP are generated by first filling in a pre-stored skeleton pattern for NounPhrase, resulting in: <?determiner ,NesssgeAdJ MesssgeHead ,NessegeCass~;and then generating patterns for each of the Cases, substituting the appropriate FillerTypes for the slot names that appear in the patterns used to define the Cases, thus generating the subpatterns:<~[from Person> <%to Person> <Zdated Data> <Zslnce Date>The slot names are not discarded but used in the results of the subrules to ensure that the objects which match the substituted FillerTypes and up in the correct slot of the result produced by the top-level message rule. This compilation procedure must be performed in its entirety before any input parsing can be undertaken.While this approach to language definition was successful in freeing the language designer from having to know details of the parser essentially irrelevant tO him, it also made the process of language development very much slower. Every time the designer wished to make the smallest change to the grammar, it was necessary to go through the time-consuming compilation procedure. 
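The displayed formalism itself did not survive extraction, so the following is only a rough, hypothetical reconstruction of what a declarative "message" definition with Schema and Syntax slots might look like when written down as a data structure; all slot names and filler types are illustrative, not the original notation.

```python
# Hypothetical reconstruction, not the original tool-description formalism.
MESSAGE_DEFINITION = {
    "Schema": {
        "Sender": "Person",
        "Recipient": "Person",
        "Date": "Date",
        "Subject": "Text",
    },
    "Syntax": {
        "kind": "NounPhrase",
        "head-nouns": ["message", "note", "mail"],
        "cases": [
            {"marker": "from",  "slot": "Sender"},
            {"marker": "to",    "slot": "Recipient"},
            {"marker": "about", "slot": "Subject"},
            {"marker": "since", "slot": "Date"},
        ],
    },
}
```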
Since the development of a task.specific language typically involves many small changes, this has proved a significant impediment to the usefulness of FlexP.The construction-specific approach offers a way round this problem. Since the parsing strategies and amOiguity representations are specific to particular constructions, it is possible to represent each different type of construction differently -there is no need to translate the language into a uniformly represented grammar. In addition, the constructions in terms of which it iS natural to define a language are exactly those for which there will be specific parsing strategies, and grammar representations. It therefore becomes possible to dispense with the coml~ilation step reauired for FlexP, and instead interpret the language definition directly. This drastically cuts the time needed to make changes to the grammar, and so makes the parsing system much more useful. For example, the Syntax slot of the previous example formalism might become: This grammar representation, equally convenient from a user's point of view, should be directly interpretable by a .parser specific to the NounPhrase case type of construction. All the information needed by such a parser, including a list of all the case markers, and the type of oblect that fills each case slot is directly enough accessible from this representation that an intermediate compilation phase should not be required, with all the ensuing benefits mentioned above for language development. | null | There will be many occasions, even for a flexible parser, when complete, unambiguous parsing of the input tO an interactive system is impossible. In such circumstances, the parser should interact with the system user to resolve the problem. Moreover, to make things as easy as possible for the user, the system should phrase its request for clarafication in terms that fOCUS as tightly as possible on the real source and nature of the difficulty. In the case of ambiguity resolution, this means that the parser must produce a representation of the ambiguity that does not duplicate unambiguous material, This implies specific ambiguity rel~resentations for each b/De of construction recognized by the parser, and corresponding specific parSthg strategies to generate such representations. There are other advantages to a constructionspecific approach including more accurate and efficient correction of ungrammaticality, more efficient parsing of grammatical input, and easier task.specific language development. This final benefit arises because a construction.specific approach allows a language definition that is natural in terms of the task domain to be interpreted directly without compilation into a uniform grammar formalism, thus greatly speeding the testing of changes to the language definition. | Main paper:
introduction:
There has been considerable interest recently in the topic of flexible parsing, i.e. the parsing of input that deviates to a greater or lesser extent from the grammar expected by the parsing system. This iriterest springs from very practical concerns with the increamng use of natural language in computer interfaces. When people attempt to use such interfaces, they cannot be expected always to conform strictly to the interfece's grammar, no matter how loose and accomodating that grammar may be. Whenever people spontaneously use a language, whether natural or artificial, it is inevitable that they will make errors of performance. Accordingly, we [3] and other researchers including Weischedel and Black [6] , and Kwasny and Sondheimer [5] , have constructed flexible parsers which accept ungrammatical input, correcting the errors whenever possible, generating several alternative interpretations if more than one correction is plausible, and in cases where the input cannot be massaged into lull grammaticality, producing as complete a partial parse as possible.If a flexible parser being used as part of an interactive system cannot correct ungrammatical input with total, certainty, then the system user must be involved in the resolution of the difficulty or the confirmation of the parser's Correction. The approach taken by Weischedel and Black [6] in such situations is to inform the user about the nature of the difficulty, in the expectation that he will be able to use this information to produce a more acceptable input next time, but this can involve the user in substantial retyping. A related technique, adopted by the COOP system [4] , is to paraphrase back tO the user the one or more parses that the system has produced from the user!s input, and to allow the user to confirm the parse or select one of the ambiguous alternatives, This approach still means a certain amount of work for the user. He must check the paraphrase to see if the system has interpreted what he said correctly and without omission, and in the case of ambiguity, he must compare the several paraphrases to see which most ClOsely corresponds 1This i'e~earch ~ =k~oneoreO by the Air Force Office Of Scientific ReseMch url~" Contract F49620-79.C-0143, The views anO conclusions contained in this document thOSe Of the author and sttould not be interpreted a.s representing [he olficial policies, eJther exl~'e~e¢l or =mDlieO. ol the Air Force Ollice of Scicmlifi¢ Researcll or the US Government to what he meant, a non-trivial task if the input is lengthy and the differences small.Experience with our own flexible parser suggests that the way requests for clarification in such situations are phrased makes a big difference to the ease and accuracy with which the user can correct his errors, and that the user is most helped by a request which focuses as tightly as possible on the exact source and nature of the difficulty. Accordingly, we have adopted the following simple principle for the new flexible parser we are presently constructing: when the parser cannot uniquely resolve a problem in its input, it should as/( the user for a correction in as direct and focused a manner as l~ossible.Furthermore, this request for clarification should not prejudice the processing of the rest of the input, either before or after the problem occurs, in other words, if the system cannot parse one segment of the input, it should be able to bypass it, parse the remainder, and then ask the user to restate that and only that segment of the input. 
Or again, if a small part of the input' is missing or garbled and there are a limited number of possibilities for what ought to be there, the parser should be able to indicate the list of possibilities together with the context from which the information is missing rather than making the user compare several complete paraphrases of the input that differ only slightly.In what follows, we examine some of the implications of these ideas. We restrict our attention to cases in which a flexible parser can correct an input error or ungrammaticaUty, but only to within a constrained set of alternatives.We consider how to produce a focused ambiguity resolution request for the user to distinguish between such a set of corrections. We conclude that:• the problem must be tackled on a construction.specific basis,• and special representations must be devised for all the structural ambiguities that each construction type can give rise to.We illustrate these arguments with examples involving case constructions. There are additional independent reasons for adopting a construction,specific approach to flexible parsing, including increased efficiency and accuracy in correcting ungrammaticality, increased efficiency in parsing grammatical input, and ease of task.specific language definition. The first two of these are discussed in [2] , and this paper gives details of the third.In this section we report on experience with our earlier flexible parser, RexP [3] , and show why it is ill.suited to the generation of focused requests to its user for the resolution of input ambiguities. We propose solutions to the problems with FlexP. We have already incorporated these improvements into an initial version of a new flexible parser [2] .The following input is typical for an electronic mail system interface [1] with which FlexP was extensively used:The fact that this is not a complete sentence in FlexP's grammar causes no problem. The only real difficulty comes from *'Jon", which should presumably be either "Jun" or "Jan". FlexP's spelling corrector can come to the same conclusion, so the output contains two complete parses which are passed onto the next stage of the mail system interface. This schematized property list style of representation should be interpreted in the obvious way, FlexP operates by bottom.up pattern matching of a semanttc grammar of rewrite rules which allOwS it tO parse directly into this form of representation, which is the form required by the next phase of the interface.if the next stage has access to other contextual information which allows it conclude that one or other of these parses was what was intended, then it can procede to fulfill the user's request. Otherwise it has little choice but to ask a Question involving paraphrases of each of the amDiguous interpretations, such as:Because it is not focused on the source of the error, this Question gives the user very little held in seeing where the problem with his input actually lies• Furthermore. the systems representation of the ambiguity as several complete parses gives Jt very little help in understanding a response of "June" from the user, a very natural.and likely one in the circumstances. In essence, the parser has thrown away the information on the specific source of the ambiguity that it once had. and would again need to deal adequately with that response from the user. This representation is exactly like the one above except that the Month slot is tilled by an AmbiguitySet record. 
This record allows the ambiguity between january and june to be confined to the month slot where it belongs rather than expanding to an ambiguity of the entire input as in the first approach we discussed. By expressing the ambiguity set ssa disjunction, it would be straightforward to generate from this representation a much m_"re focused request for clarification such as:January or June 5?A reply of "June" would also De much easier to deal with.However. this approach only works if the aml~iguity corresponds tO an entire slot filler. Suppose. for example, that inste,~d of mistyping the montl~, the user omitted or ,~o completely garbled the preposition "from" that the parser effectmvely saw:the messages Fred Smith that arrived after Jan 5In the grammar used by FlexP for this particular application, the connexion between Fred Smith and the message could have been expressed (to within synonyms) only by "from", "to". or "copied to", FlexP can deal with this input, and correct it tO within this three way ambiguity. To represent the ambiguity, it generates three complete parses isomorphic to the first output example above, except that Sender is replaced by Recipient and CC in the second and third parses respectively. Again, this form of representation does not allow the System tO ask a focused question about the source of the ambiguity or interpret naturally elliptical replies to a request to distinguish between the three alternatives. The previous solution is not applicable because the ambiguity lies in the structure of the parser output rather than at one of its terminal nodes. Using a case notation, it is not permissible to gut an "AmbiguitySet" in place of one of the deep case markers. This example parser output is similar to the two given previously, but instead of having a Sender slot, it has an AmbiguousSIots slot. The filler of this slot is a list of records, each of which specifies a SlotFiller and a list of PossibleSIots. The SIolFiller is a structure that would normally be • the filler of a slot in the top-level description (of a message in this case), but the parser has been unable to determine exactly which higher.level slot it shou#d fit into: the possibilities are given in PossibleSIots. With this representation, it is now straightforward to construct a directed question such as:Do you mean the messages from, to, or copied to Fred Smith that arrived after January 5?Such Questions can be generated by outputting AmbiguousSIot records as the disjunction (in boldface) of the normal case markers for each of the Poss=bleSlots followed by the normal translation of the SlotFiller. The main point here, however, does not concern the question generation mechanism, nor the exact deta, ls of the formalism for representing ambiguity, it is. rather, that a radical revision of the initial formalism was necassar~ in order tO represent structural ambiguities without duplicat=on of non-ambiguous material.The adoption of such representations for ambiguity has profound implications for the parsing strategies employed by any parser which tries to produce them. For each type of construction that such a parser can encounter, and here we mean construction types at the level of case construction, conjoined list, linear fixed-order pattern, the parser muSt "know" about ell the structural ambiguities that the construction can give rise to, and must be prepared to detect and encode appropriately such ambiguities when they arise. 
We have chosen to achieve this by designing a number of different parsing strategies, one for each type of construction that will be encountered, and making the parser switch between these strategies dynamically. (Nor is this problem merely an artifact of case notation; it would arise in exactly the same way for a standard syntactic parse of a sentence such as the well-known "I saw the Grand Canyon flying to New York." The difficulty arises because the ambiguity is structural, and structural ambiguities can occur no matter what form of structure is chosen.) Each such construction-specific parsing strategy encodes detailed information about the types of structural ambiguity possible with that construction and incorporates the specific information necessary to detect and represent these ambiguities.
other reasons for a construction-specific approach:
There are additional independent reasons for adopting a construction-s~oecific approach to flexible parsing. Our initially motivating reason was that dynamically selected constructidn.specific parsing strategies can make corrections to erroneous input more accurately and efficiently than a uniform parsing procedure, it also turned out that such an approach provided significant advantages in the parsing of correct input as well. These points are covered in detail in [2] .A further advantage is related to language definition. Since, our initial flexible parser, FlexP, applied its uniform parsing strategy to a uniform grammar of pattern.matching rewrite rules, it was not possible to cover constructions like the one used in the examples above in a single grammar rule. A gostnominal case frame such as the one that covers the message descriptions used as examples above must be .spread over several rewrite rules. The patterns actually used in RexP look like: <?determiner "MessageAdj 14essageHead *MessageCase> <%from Person> <Y,s t nee Date>The first top.level pattern says that a message description is an optional (?) determiner, followed by an arbitrary number (') of message adjectives followed by a message head word (one meaning "message"), followed by an arbitrary number of message cases. Because each case has more than ont~ component, each must be recognized by a separate pattern like the second and third above. Here % means anything in the same word class, "that arrived after", for instance, is equivalent to "since" for this purpose.The point here is not the details of the pattern notation, but the fact that this is a very unnatural way of representing a postnominal case construction, Not only does it cause problems for a flexible parser, as explained in [2] , but it is also quite inconvenient to create in the first place. Essentially, one has to know the specific trick of creating intermediate, and from the language point of view, superfluous categories like MeesageCase in the example above. Since, we designed FlexP as a tool for use in natural language interfaces, we considered it unreasonable to expect the designer of such a system to have the specialized knowledge to create such obscure rules. Accordingly, we designed a language definition formalism that enabled a grammar to be specified in terms much more natural to the system being interfaced to. The above construction for the description of a message, for instance, could be defined as a single unified construction without specifying any artificial intermediate constituents, as follows: In addition to the syntax of a message description, this piece of formalism also describes the internal structure of a message, and is intended for use with a larger interface system [1] of which FlexP is a part. The larger system provides an interface to a functional subsystem or tool, and is tool-independent in the sense that it is driven by a declarative data base in which the objects and operations of the tool currently being interfaced to are defined in the formalism shown. The example is, in fact, an abbreviated version of the definition of a message from the declarative tool description for an electronic mail system tool with which, the interface was actually used.In the example, the Syntax slot defines the input syntax for a message; it is used to generate rules for RexP, which ere in turn used to parse input descriptions of messages from a user. 
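To make the point about spreading one construction over several rewrite rules concrete, the rules can be pictured as data, as in the sketch below. The notation only approximates the patterns quoted in the text, and the artificial intermediate category MessageCase is exactly the kind of constituent the language designer should not have to invent.

```python
# Approximate rendering of the FlexP-style rewrite rules quoted above.
REWRITE_RULES = [
    {"result": "MessageDescription",
     "pattern": ["?determiner", "*MessageAdj", "MessageHead", "*MessageCase"]},
    {"result": "MessageCase", "pattern": ["%from", "Person"]},
    {"result": "MessageCase", "pattern": ["%since", "Date"]},
]
# "?" marks an optional element, "*" one that may repeat any number of times,
# and "%" any word in the same word class as the one named.
```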
FlexP's grammar to parse input for the mail system tool is the onion of all the rules compiled in this way from the Syntax fields of ell the objects and operations in the tool description. The SyntaX field of the example says that the syntax for a message is that of a noun phrase, i.e. any of the given head nouns (angle brackets indicate Oatterns of words), followed by any of the given postnominal Cases, preceded by any adjectives -none are given here, which can in turn be preceded by a determiner. The up.arrows in the Case patterns refer beck to slots of a message, as specified in the Scheme slOt of the example -the information in the Schema sl0t is aJso used by other parts of the interface. The actual grammar rules needed by FlexP are generated by first filling in a pre-stored skeleton pattern for NounPhrase, resulting in: <?determiner ,NesssgeAdJ MesssgeHead ,NessegeCass~;and then generating patterns for each of the Cases, substituting the appropriate FillerTypes for the slot names that appear in the patterns used to define the Cases, thus generating the subpatterns:<~[from Person> <%to Person> <Zdated Data> <Zslnce Date>The slot names are not discarded but used in the results of the subrules to ensure that the objects which match the substituted FillerTypes and up in the correct slot of the result produced by the top-level message rule. This compilation procedure must be performed in its entirety before any input parsing can be undertaken.While this approach to language definition was successful in freeing the language designer from having to know details of the parser essentially irrelevant tO him, it also made the process of language development very much slower. Every time the designer wished to make the smallest change to the grammar, it was necessary to go through the time-consuming compilation procedure. Since the development of a task.specific language typically involves many small changes, this has proved a significant impediment to the usefulness of FlexP.The construction-specific approach offers a way round this problem. Since the parsing strategies and amOiguity representations are specific to particular constructions, it is possible to represent each different type of construction differently -there is no need to translate the language into a uniformly represented grammar. In addition, the constructions in terms of which it iS natural to define a language are exactly those for which there will be specific parsing strategies, and grammar representations. It therefore becomes possible to dispense with the coml~ilation step reauired for FlexP, and instead interpret the language definition directly. This drastically cuts the time needed to make changes to the grammar, and so makes the parsing system much more useful. For example, the Syntax slot of the previous example formalism might become: This grammar representation, equally convenient from a user's point of view, should be directly interpretable by a .parser specific to the NounPhrase case type of construction. All the information needed by such a parser, including a list of all the case markers, and the type of oblect that fills each case slot is directly enough accessible from this representation that an intermediate compilation phase should not be required, with all the ensuing benefits mentioned above for language development.
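A small sketch, with invented field names, of what interpreting such a NounPhrase definition directly might look like: the parser consults the case list at parse time, so no compiled grammar, and hence no recompilation after each grammar change, is needed.

```python
# Sketch only; field names are assumptions, not the system's actual formalism.
MESSAGE_SYNTAX = {
    "head-nouns": ["message", "note"],
    "cases": [
        {"markers": ["from"],           "slot": "Sender",    "filler": "Person"},
        {"markers": ["to"],             "slot": "Recipient", "filler": "Person"},
        {"markers": ["since", "dated"], "slot": "Date",      "filler": "Date"},
    ],
}

def find_case_for_marker(word, syntax=MESSAGE_SYNTAX):
    """Look the marker up in the definition itself; no compilation step."""
    for case in syntax["cases"]:
        if word in case["markers"]:
            return case["slot"], case["filler"]
    return None

print(find_case_for_marker("since"))   # ('Date', 'Date')
```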
conclusion:
There will be many occasions, even for a flexible parser, when complete, unambiguous parsing of the input to an interactive system is impossible. In such circumstances, the parser should interact with the system user to resolve the problem. Moreover, to make things as easy as possible for the user, the system should phrase its request for clarification in terms that focus as tightly as possible on the real source and nature of the difficulty. In the case of ambiguity resolution, this means that the parser must produce a representation of the ambiguity that does not duplicate unambiguous material. This implies specific ambiguity representations for each type of construction recognized by the parser, and corresponding specific parsing strategies to generate such representations. There are other advantages to a construction-specific approach, including more accurate and efficient correction of ungrammaticality, more efficient parsing of grammatical input, and easier task-specific language development. This final benefit arises because a construction-specific approach allows a language definition that is natural in terms of the task domain to be interpreted directly, without compilation into a uniform grammar formalism, thus greatly speeding the testing of changes to the language definition.
Appendix:
| null | null | null | null | {
"paperhash": [
"carbonell|dynamic_strategy_selection_in_flexible_parsing",
"hayes|flexible_parsing",
"kwasny|ungrammaticality_and_extra-grammaticality_in_natural_language_understanding_systems"
],
"title": [
"Dynamic Strategy Selection in Flexible Parsing",
"Flexible Parsing",
"Ungrammaticality and Extra-Grammaticality in Natural Language Understanding Systems"
],
"abstract": [
"Robust natural language interpretation requires strong semantic domain models, \"fail-soft\" recovery heuristics, and very flexible control structures. Although single-strategy parsers have met with a measure of success, a multi-strategy approach is shown to provide a much higher degree of flexibility, redundancy, and ability to bring task-specific domain knowledge (in addition to general linguistic knowledge) to bear on both grammatical and ungrammatical input. A parsing algorithm is presented that integrates several different parsing strategies, with case-frame instantiation dominating. Each of these parsing strategies exploits different types of knowledge; and their combination provides a strong framework in which to process conjunctions, fragmentary input, and ungrammatical structures, as well as less exotic, grammatically correct input. Several specific heuristics for handling ungrammatical input are presented within this multi-strategy framework.",
"When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking-off and restarting, speaking in fragments, etc., Their human listeners are usually able to cope with these deviations with little difficulty. If a computer system wishes to accept natural language input from its users on a routine basis, it must display a similar indifference. In this paper, we outline a set of parsing flexibilities that such a system should provide. We go on to describe FlexP. a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system.",
"Among the components included in Natural Language Understanding (NLU) systems is a grammar which spec i f i es much o f the l i n g u i s t i c s t ruc tu re o f the ut terances tha t can be expected. However, i t is ce r ta in tha t inputs that are ill-formed with respect to the grammar will be received, both because people regularly form ungra=cmatical utterances and because there are a variety of forms that cannot be readily included in current grammatical models and are hence \"extra-grammatical\". These might be rejected, but as Wilks stresses, \"...understanding requires, at the very least, ... some attempt to interpret, rather than merely reject, what seem to be ill-formed utterances.\" [WIL76]"
],
"authors": [
{
"name": [
"J. Carbonell",
"P. Hayes"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Hayes",
"G. Mouradian"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Kwasny",
"N. Sondheimer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null
],
"s2_corpus_id": [
"7271323",
"11007680",
"12695499"
],
"intents": [
[],
[],
[]
],
"isInfluential": [
false,
false,
false
]
} | Problem: The paper addresses the challenge of parsing input that deviates from expected grammar in natural language interfaces, requiring correction and resolution of ambiguities through user interaction.
Solution: The paper proposes a construction-specific approach to flexible parsing, where specialized parsing techniques are used for each type of construction, and specialized ambiguity representations are employed to facilitate focused interaction with the user for resolving ambiguities efficiently. | 524 | 0.030534 | null | null | null | null | null | null | null | null |
0ad56623eea8b97cf898cc78a2df4a38e6d59fea | 29571804 | null | Artificial Intelligence Corporation | The INTELLECT natural language database query system, a product of Artificial Intelligence Corporation, is the only commercially available system with true English query capability. Based on experience with INTELLECT in the areas of quality assurance and customer support, a number of issues in evaluating a natural language database query system, particularly the INTELLECT system, will be discussed. A, I. Corporation offers licenses for customers to use the INTELLECT software on their computers, to access their databases. We now have a number of customer installations, plus reports from companies that are marketing INTELLECT under agreements with us, so that we can begin to discuss user reactions as possible criteria for evaluating our system. | {
"name": [
"Crout, J. Norwood"
],
"affiliation": [
null
]
} | null | null | 19th Annual Meeting of the Association for Computational Linguistics | 1981-06-01 | 0 | 1 | null | A, I. Corporation offers licenses for customers to use the INTELLECT software on their computers, to access their databases. We now have a number of customer installations, plus reports from companies that are marketing INTELLECT under agreements with us, so that we can begin to discuss user reactions as possible criteria for evaluating our system.INTELLECT's basic function is to translate typed English queries into retrieval commands for a database management system, then present the retrieved data, or answers based on it, to the terminal user. It is a general software tool, which can be easily applied to a wide variety of databases and user environments. For each database, a Lexicon, or dictionary, must be prepared. The Lexicon describes the words and phrases relevant to the data and how they relate to the data items. The system maintains a log of all queries, for analysis of its performance.Artificial Intelligence Corporation was founded about five years ago, for the specific purpose of developing and marketing an English language database query product. INTELLECT was the creation of Dr. Larry Harris, who presently supervises its ou-golng development. The company has been successful in developing a marketable product and now looks forward to sisnlficant expansion of both its customer base and its product line. Versions of the product presently exist for interfacing with ADABAS, VSAM, Multics Relational Data Store, and A. I. Corporation's own Derived File Access Method. Additional interfaces, including one to Cullinane's Integrated Database Management System, are nearing completion.A. I. Corporation's quality assurance program tests the ability of the system to perform all of its intended retrieval, processing, and data presentation functions. We also test its fluency: its ability to understand, retrieve, and process requests that are expressed in a wide variety of English phrasings. Part of this fluency testing consists of free-wheellng queries, but a major component of it is conducted in a formalized way: a number of phrases (between 20 and 50) are chosen, each of which represents either selection of records, specification of the data items or expressions to be retrieved, or the formatting and processing to be performed. A query generator program then selects different combinations of these phrases and, for each set of phrases, generates queries by arranging the phrases in different permutations, with and without connecting prepositions, conjunctions, and aruicles. The file of queries is then processed by the INTELLECT system in a batch mode, and the resulting transcript of queries and responses is scanned to look for instances of improper interpretation. Such a file of queries will contain, in addition to reasonable English sentences, both sentence fragments and unnatural phrasings. This kind of test is desirable, since users who are familiar with the system will frequently enter only those words and phrases chat are necessary to express their needs, with little regard for English syntax, in order to minimize the number of key-strokes. The system in fact performs quite well with such terse queries, and users appreciate this capability. 
Query statistics from this kind of testing are not meaningful as a measure of system fluency since many of the queries were deliberately phrased in an un-English way.In addition to our testing program, information on INTELLECT's performance comes from the experiences of our customers. Customer evaluations of its fluency are uniformly good; there is a lot of enthusiasm for this technical achievement and its usefulness. Statistics on • several hundred queries from two customer sites are presented. They show a high rate of successful processing of queries. The main conclusion to be drawn from this is chat the users are able to communicate effectively with INTELLECT in their environment.INTELLECT's basic capability is data retrieval. Within the language domain defined by the retrieval semantics of the particular DBMS and the vocabulary of the particular database, INTELLECT's understanding is fluent.INTELLECT's capabilities go beyond simple retrieval, however. It can refer back to previous queries, do arithmetic calculations with numeric fields, calculate basic functions such as maximum and total, sort and break down records in categories, and vary its output format. Through this ausmentatlon of its retrieval capability, INTELLECT has become more useful in a business environment, but the expanded language domain is not so easily charaeterlzed, or described, to naive users.A big advantage of English language query systems is the absence of training as a requirement for its use; this permits people to access data who are unwilling or unable to learn how to use a structured query system. All that is required is that a person know enough about the data to be able to pose a meaningful question and be able to type on a terminal keyboard. INTELLECT is a very attractive system for such casual or technically unsophisticated users. Such people, however, often do not have a clear concept of the data model being used and cannot distinguish between the data retrieval, summarization, or categorization of retrieved data which INTELLECT can do, and more complex processing. They may ask for thlngs that are outside the system's functional capabilities and, hence, its domain of language comprehension.In st-,~-ry, we feel that INTELLECT has effectively solved the man-machine communication problem for database retrieval, within its realm of applicability. We are now addressing the question of what business environments are best served by Engllsh-languaEe database retrieval while at the same time continuing our development by si~ificantly expanding INTELLECT's semantic, and hence its lin~uistlc, domain. | null | null | null | null | Main paper:
:
A, I. Corporation offers licenses for customers to use the INTELLECT software on their computers, to access their databases. We now have a number of customer installations, plus reports from companies that are marketing INTELLECT under agreements with us, so that we can begin to discuss user reactions as possible criteria for evaluating our system.INTELLECT's basic function is to translate typed English queries into retrieval commands for a database management system, then present the retrieved data, or answers based on it, to the terminal user. It is a general software tool, which can be easily applied to a wide variety of databases and user environments. For each database, a Lexicon, or dictionary, must be prepared. The Lexicon describes the words and phrases relevant to the data and how they relate to the data items. The system maintains a log of all queries, for analysis of its performance.Artificial Intelligence Corporation was founded about five years ago, for the specific purpose of developing and marketing an English language database query product. INTELLECT was the creation of Dr. Larry Harris, who presently supervises its ou-golng development. The company has been successful in developing a marketable product and now looks forward to sisnlficant expansion of both its customer base and its product line. Versions of the product presently exist for interfacing with ADABAS, VSAM, Multics Relational Data Store, and A. I. Corporation's own Derived File Access Method. Additional interfaces, including one to Cullinane's Integrated Database Management System, are nearing completion.A. I. Corporation's quality assurance program tests the ability of the system to perform all of its intended retrieval, processing, and data presentation functions. We also test its fluency: its ability to understand, retrieve, and process requests that are expressed in a wide variety of English phrasings. Part of this fluency testing consists of free-wheellng queries, but a major component of it is conducted in a formalized way: a number of phrases (between 20 and 50) are chosen, each of which represents either selection of records, specification of the data items or expressions to be retrieved, or the formatting and processing to be performed. A query generator program then selects different combinations of these phrases and, for each set of phrases, generates queries by arranging the phrases in different permutations, with and without connecting prepositions, conjunctions, and aruicles. The file of queries is then processed by the INTELLECT system in a batch mode, and the resulting transcript of queries and responses is scanned to look for instances of improper interpretation. Such a file of queries will contain, in addition to reasonable English sentences, both sentence fragments and unnatural phrasings. This kind of test is desirable, since users who are familiar with the system will frequently enter only those words and phrases chat are necessary to express their needs, with little regard for English syntax, in order to minimize the number of key-strokes. The system in fact performs quite well with such terse queries, and users appreciate this capability. Query statistics from this kind of testing are not meaningful as a measure of system fluency since many of the queries were deliberately phrased in an un-English way.In addition to our testing program, information on INTELLECT's performance comes from the experiences of our customers. 
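A toy sketch of the kind of query generator described above; the phrases and connecting words are invented, and the real test program presumably does considerably more bookkeeping.

```python
from itertools import permutations

# Invented test phrases; real ones would specify record selection, the fields
# to retrieve, and the formatting or processing to perform, as described above.
PHRASES = ["the sales figures", "for the eastern region", "sorted by month"]
CONNECTORS = ["", "and", "of"]

def generate_queries(phrases, connectors):
    # Arrange the chosen phrases in every order, with and without a
    # connecting word between them.
    for order in permutations(phrases):
        for conn in connectors:
            sep = f" {conn} " if conn else " "
            yield sep.join(order)

for query in generate_queries(PHRASES, CONNECTORS):
    print(query)
```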
Customer evaluations of its fluency are uniformly good; there is a lot of enthusiasm for this technical achievement and its usefulness. Statistics on • several hundred queries from two customer sites are presented. They show a high rate of successful processing of queries. The main conclusion to be drawn from this is chat the users are able to communicate effectively with INTELLECT in their environment.INTELLECT's basic capability is data retrieval. Within the language domain defined by the retrieval semantics of the particular DBMS and the vocabulary of the particular database, INTELLECT's understanding is fluent.INTELLECT's capabilities go beyond simple retrieval, however. It can refer back to previous queries, do arithmetic calculations with numeric fields, calculate basic functions such as maximum and total, sort and break down records in categories, and vary its output format. Through this ausmentatlon of its retrieval capability, INTELLECT has become more useful in a business environment, but the expanded language domain is not so easily charaeterlzed, or described, to naive users.A big advantage of English language query systems is the absence of training as a requirement for its use; this permits people to access data who are unwilling or unable to learn how to use a structured query system. All that is required is that a person know enough about the data to be able to pose a meaningful question and be able to type on a terminal keyboard. INTELLECT is a very attractive system for such casual or technically unsophisticated users. Such people, however, often do not have a clear concept of the data model being used and cannot distinguish between the data retrieval, summarization, or categorization of retrieved data which INTELLECT can do, and more complex processing. They may ask for thlngs that are outside the system's functional capabilities and, hence, its domain of language comprehension.In st-,~-ry, we feel that INTELLECT has effectively solved the man-machine communication problem for database retrieval, within its realm of applicability. We are now addressing the question of what business environments are best served by Engllsh-languaEe database retrieval while at the same time continuing our development by si~ificantly expanding INTELLECT's semantic, and hence its lin~uistlc, domain.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 524 | 0.001908 | null | null | null | null | null | null | null | null |
ea4ae6831234a63f60a398e56a6257b0efb7c851 | 7271323 | null | Dynamic Strategy Selection in Flexible Parsing | Robust natural language interpretation requires strong semantic domain models, "fall-soff" recovery heuristics, and very flexible control structures. Although single-strategy parsers have met with a measure of success, a multi.strategy approach is shown to provide a much higher degree of flexibility, redundancy, and ability to bring task-specific domain knowledge (in addition to general linguistic knowledge) to bear on both grammatical and ungrammatical input. A parsing algorithm is presented that integrates several different parsing strategies, with case-frame instantiation dominating. Each of these parsing strategies exploits different types of knowledge; and their combination provides a strong framework in which to process conjunctions, fragmentary input, and ungrammatical structures, as well as less exotic, grammatically correct input. Several specific heuristics for handling ungrammatical input are presented within this multi-strategy framework. | {
"name": [
"Carbonell, Jaime G. and",
"Hayes, Philip J."
],
"affiliation": [
null,
null
]
} | null | null | 19th Annual Meeting of the Association for Computational Linguistics | 1981-06-01 | 20 | 48 | null | When people use language spontaneously, they o~ten do not respect grammatical niceties. Instead of producing sequences of grammatically well-formed and complete sentences, they often miss out or repeat words or phrases, break off what they are .saying and rephrase or replace it, speak in fragments, or use otherwise incorrect grammar. While other people generally have little trouble co'reprehending ungrammatical utterances, most' natural language computer systems are unable to process errorful input at all. Such inflexibility in parsing is a serious impediment to the use of natural language in interactive computer systems.Accordingly, we [6] and other researchers including Wemchedel and Black [14] , and Kwasny and Sondhelmer [9] , have attempted to produce flexible parsers, i.e. parsers that can accept ungrammatical input, correcting the errors whan possible, and generating several alternative interpretations if appropriate.While different in many ways, all these approaches to flexible parsing operate by applying a uniform parsing process to a uniformly represented grammar. Because of the linguistic performance problems involved, this uniform procedure cannot be as simple and elegant as the procedures followed by parsers based on a pure linguistic competence model, such as Parsifal [10] . Indeed, their parsing procedures may involve several strategies that are applied in a predetermined order when the input deviates from the grammar, but the choice of strategy never depends on the specific type of construction being parsed. In light of experience with our own flexible parser, we have come to believe that such uniformity is not conducive to good flexible parsing. Rather, the strategies used should be dynamically selected according to the type of construction being parsed. For instance, partial.linear pattern matching may be well suited to the flexible parsing of idiomatic phrases, or specialized noun phrases such as names, dates, or addresses (see also [5] ), but case constructions, such as noun phrases with trailing prepositional phrases, or imperative phrases, require case-oriented parsing strategies. The undedying principle is simple: The ap~rol~riate knowledge must be brought to bear at the right time --and it must not interfere at other times. Though the initial motivation for this approach sprang from the r~eeds of flexible parsing, such construction.specific techniques can provide important benefits even when no grammatical deviations are encountered, as we will show. This observation may be related to the current absence of any single universal parsing strategy capable of exploiting all knowledge sources (although ELI [12] and its offspring [2] are efforts in this direction).Our objective here is not to create the ultimate parser, but to build a very flexible and robust taak.oriented parser capable of exploiting all relevant domain knowledge as well as more general syntax and semantics. The initial application domain for the parser is the central component of an interface to various computer subsystems (or tools). 
This interface and, therefore the parser, should be adaptable to new tools by substituting domain-specific data bases (called "tool descriptions") that govern the behaviorof the interface, including the invocation of parsing strategies, dictionanes and concepts, rather than requiring any domain adaptations by the interface system itself.With these goals in mind, we proceed to give details of the kinds of difficulties that a uniform parsing strategy can lead to, and show how dynamically-selected construction.specific techniques can help. We list a number of such specific strategies, then we focus on our initial implementation of two of these strategies and the mechanism that dynamically selects between them while pm'alng task-oriented natural language imperative constructions. Imperatives were chosen largely because commands and queries given to a task-oriented natural language front end often take that form [6] . | null | Our present flexible parser, which we call RexP, is intended to parse correctly input that correaponds to a fixed grammar, and also to deal with input that deviates from that grammar by erring along certain classes of common ungrammaticalities. Because of these goals, the parser is based on the combination of two uniform parsing strategies: bottom-up parsing and pattern.matching. The choice of a bottom.up rather then a top-down strategy was based on our need to recognize isolated sentence fragments, rather than complete sentences, and to detect restarts and continuations after interjections. However, since completely bottom-up strategies lead to the consideration of an unnecessary number of alternatives in correct input, the algorithm used allowed some of the economies of top-dOwn parsing for non-deviant input. Technically speaking, this made the parser left-corner rather than bottom-up. We chose to use a grammar of linear patterns rather than, say, a transition network because pattern.matching meshes well with bottom-up parsing by allowing lookup of a pattern from the presence in the input of any of its constituents; because pattern-matching facilitates recognition of utterances with omissions and substitutions when patterns are recognized on the basis of partial matches; and because pattern. matching is necessary for the recognition of idiomatic phrases. More details of the iustifications for these choices can be found in [6] . FlexP has been tested extensively in conjunction with a gracefully interacting interface to an electronic mail system [1] . "Gracefully interacting" means that the interface appears friendly, supportive, and robust to its user. In particular, graceful interaction requires the system to tolerate minor input errors and typos, so a flexible parser is an imbortant component of such an interface. While FlexP performed this task adeduately, the experience turned up some problems related to the major theme of this paper. These problems are all derived from the incomparability between the uniform nature of The grammar representation and the kinds of flexible parsing strategies required to deal with the inherently non-uniform nature of some language constructions. In particular:.•Oifferent elements in the pattern of a single grammar rule can serve raclically different functions and/or exhibit different ease of recognition. 
Hence, an efficient parsing strategy should react to their apparent absence, for instance, in quite different ways.• The representation of a single unified construction at the language level may require several linear patterns at the grammar level, making it impossible to treat that construction • with the integrity required for adecluate flexible parsing.The second problem is directly related to the use of a pattern-matching grammar, but the first would arise with any uniformly represented grammar applied by a uniform parsing strategy.For our application, these problems manifested themselves most markedly by the presence of case constructions in the input language. Thus. our examples and solution methOds will be in terms of integrating case-frame instantiat=on with other parsing strategies. Consider, for example, the following noun phrase with a typical postnominal case frame:"the messages from Smith aDout ADA pragmas dated later than Saturday".The phrase has three cases marked by "from", "about", and "dated later than". This Wpe of phrase is actually used in FlexP's current grammar, and the basic pattern used to recognize descriptions of messages is:<?determiner eMassageAd,1 ~4essagoHoad •NOlsageC8$o)which says that a message description iS an optional (?) determiner. followed by an arbitrary number (') of message adjectives followed by a message head word (i.e. a word meaning "r~essage"). followed by an arbitrary number of message cases, in the example. "the" is the determiner, there are no message adjectives. "messages" is the message head word. and there are three message cases: "from Smith". • 'about ADA pragmas", end "dated later than". (~=cause each case has more than one component, each must be recognized by a separate pattern:<',Cf tom I~erson> <~'.abou t Subject> <~,s tnce Data> Here % means anything in the same word class, "dated later than", for instance, is eauivalent to "since" for this purpOSe.These patterns for message descr~tions illustrate the two problems mentioned above: the elementS of the .case patterns have radically different functions -The first elements are case markers, and the second elements are the actual subconcepts for the case. Since case indicators are typically much more restriCted in expression, and therefore much easier to recognize than Their corresponding subconc~ts, a plausible strategy for a parser that "knows" about case constructions is to scan input for the case indicators, and then parse the associated subconcepts top-down. This strategy is particularly valuable if one of the subconcepts is malformed or of uncertain form, such as the subject case in our example. Neither "ADA" nor "pragmas" is likely to be in the vocabulary of our system, so the only way the end of the subject field can be detected is by the presence of the case indicator "from" which follows iL However, the present parser cannot distinguish case indicators from case fillers -both are just elements in a pattern with exactly the same computational status, and hence it cannot use this strategy.The next section describes an algorithm for flexibly parsing case constructions. At the moment, the algorithm works only on a mixture of case constructions and linear patterns, but eventually we envisage a number of specific parsing algorithms, one for each of a number of construction types, all working together to provide a more complete flexible parser.Below, we list a number of the parsing strategies that we envisage might be used. 
Most of these strategies exploit the constrained task.oriented nature of the input language:• Case-Frame Instantiation is necessary to parse general imperative constructs and noun phrases with posThominal modifiers. This method has been applied before with some success to linguistic or conceptual cases [12] in more general parsing tasks. However, it becomes much more powerful and robust if domain-dependent constraints among the cases can be exploited. For instance, in a filemanagement system, the command "Transfer UPDATE.FOR to the accounts directory" can be easily parsed if the information in the unmarked case of transfer ("ulXlate.for" in our example) is parsed by a file-name expert, and the destination case (flagged by "to") is parsed not as a physical location, but a logical entity ins=de a machine. The latter constraint enables one to interpret "directory" not as a phonebook or bureaucratic agency, but as a reasonable destination for a file in a computer.• Semantic Grammars [8] prove useful when there are ways of hierarchically clustering domain concepts into functionally useful categories for user interaction. Semantic grammars, like case systems, can bring domain knowledge to bear in dissmbiguatmg word meaningS. However, the central problem of semantic grammars is non-transferability to other domains, stemming from the specificity of the semantic categorization hierarchy built into the grammar rules. This problem is somewhat ameliorated if this technique is applied only tO parsing selected individual phrases [13], rather than being res0onsible for the entire parse. Individual constituents, such as those recognizing the initial segment of factual queries, apply in may domains, whereas a constituent recognizing a clause about file transfer is totally domain specific. Of course, This restriction" calls for a different parsing strategy at the clause and sentence level.• (Partial) Pattern Matching on strings, using non.terminal semantic.grammar constituents in the patterns, proves to be an interesting generalization of semantic grammars. This method is particularly useful when the patterns and semantic grammar non-terminal nodes interleave in a hierarchical fashion.e Transformations to Canonical Form prove useful both for domain-dependent and domain.independent constructs. For instance, the following rule transforms possessives into "of" phrases, which we chose as canonical:['<ATTRZBUTE> tn possessive form. <VALUE> lagltfmate for attribute] ->[<VALUE> "OF" <ATTRZBUTE> In stipple forll]Hence, the parser need only consider "of" constructions ("file's destination" => "destinaUon of file"). These transforms simplify the pattern matcher and semantic grammar application process, especially when transformed constructions occur in many different contextS. e Target-specific methods may be invoked to portions of sentences not easdy handlecl by The more general methods. For instance, if a case-grammar determines that the case just s=gnaled is a proper name, a special nameexpert strategy may be called. This expe~ knows that nantes can contain unknown words (e.g., Mr. Joe Gallen D'Aguila is obviously a name with D'Aguila as the surname) but subject to ordering constraints and morphological preferences. When unknown words are encountered in other positions in a sentence, the parser may try morphological decomposition, spelling correction, querying the user, or more complex processes to induce the probable meaning of unknown words, such as the project-and-integrate technique described in [3] . 
Clearly these unknown-word strategies ought to be suppressed in parsing person names.
to the sub-parser. A partial list of parsing strategies indicated by expected fillers is:
• Sub-imperative -- Case-frame parser, starting with the command-identification pattern match above.
• Structured-object (e.g., a concept with subattributes) -- Case-frame parser, starting with the pattern-matcher invoked on the list of patterns corresponding to the names (or compound names) of the semantically permissible structured objects, followed by case-frame parsing of any present subattributes.
• Simple Object -- Apply the pattern matcher, using only the patterns indexed as relevant in the case-filler-information field.
• Special Object -- Apply the parsing strategy applicable to that type of special object (e.g., proper names, dates, quoted strings, stylized technical jargon, etc.).
• None of the above -- (Errorful input or parser deficiency) Apply the graceful recovery techniques discussed below.
(Completed frames typically reside at the top of the stack.)
8. If there is more than one case frame on the stack when trying to parse additional input, apply the following procedure:
• If the input only matches a case marker in one frame, proceed to instantiate the corresponding case-filler as outlined above. Also, if the matched case marker is not on the most embedded case frame (i.e., at the top of the context stack), pop the stack until the frame whose case marker was matched appears at the top of the stack.
• If no case markers are matched, attempt to parse unmarked cases, starting with the most deeply embedded case frame (the top of the context stack) and proceeding outwards. If one is matched, pop the context stack until the corresponding case frame is at the top. Then, instantiate the case filler, remove the case from the active case frame, and proceed to parse additional input. If more than one unmarked case matches the input, choose the most embedded one (i.e., the most recent context) and save the state of the parse on the global history stack. (This suggests an ambiguity that cannot be resolved with the information at hand.)
• If the input matches more than one case marker in the context stack, try to parse the case filler via the indexed parsing strategy for each filler-information slot corresponding to a matched case marker. If more than one case filler parses (a somewhat rare situation indicating underconstrained case frames or truly ambiguous input), save the state in the global history stack and pursue the parse assuming the most deeply embedded constituent. [Our case-frame attachment heuristic favors the most local attachment permitted by semantic case constraints.]
9. If a conjunction or disjunction occurs in the input, cycle through the context stack trying to parse the right-hand side of the conjunction as filling the same case as the left-hand side. If no such parse is feasible, interpret the conjunction as top-level, e.g., as two instances of the same imperative, or two different imperatives. If more than one parse results, interact with the user to disambiguate. To illustrate this simple process, consider:
"Transfer the programs written by Smith and Jones to ..."
"Transfer the programs written in Fortran and the census data files to ..."
"Transfer the programs written in Fortran and delete ..."
The scope of the first conjunction is the "author" subattribute of program, whereas the scope of the second conjunction is the unmarked "object" case of the transfer action.
Domain knowledge in the case-filler information of the "object" case in the "transfer" imperative inhibits "Jones" from matching a potential object for electronic file transfer. Similarly, "census data files" are inhibited from matching the "author" subattribute of a program. Thus conjunctions in the two syntactically comparable examples are scoped differently by our semantic-scoping rule relying on domain-specific case information. "Delete" matches no active case filler, and hence it is parsed as the initial segment of a second conjoined utterance. Since "delete" is a known imperative, this parse succeeds.
10. [7].
The need for embedded case structures and ambiguity resolution based on domain-dependent semantic expectations of the case fillers is illustrated by the following pair of sentences:
"Edit the programs in Fortran"
"Edit the programs in Teco"
"Fortran" fills the language attribute of "program", but cannot fill either the location or instrument case of Edit (both of which can be signalled by "in"). In the second sentence, however, "Teco" fills the instrument case of the verb "edit" and none of the attributes of "program". This disambiguation is significant because in the first example the user specified which programs (s)he wants to edit, whereas in the second example (s)he specified how (s)he wants to edit them.
The algorithm presented is sufficient to parse grammatical input. In addition, since it operates in a manner specifically tailored to case constructions, it is easy to add modifications dealing with deviant input. Currently, the algorithm includes the following steps that deal with ungrammaticality:
12. If step 4 fails, i.e. a filler of appropriate type cannot be parsed at that position in the input, then repeat step 3 at successive points in the input until it produces a match, and continue the regular algorithm from there. Save all words not matched on a SKIPPED list. This step takes advantage of the fact that case markers are often much easier to recognize than case fillers to realign the parser if it gets out of step with the input (because of unexpected interjections, or other spurious or missing words).
13. If words are on SKIPPED at the end of the parse, and cases remain unfilled in the case frames that were on the context stack at the time the words were skipped, then try to parse each of the case fillers against successive positions of the skipped sequences. This step picks up cases for which the marker was incorrect or garbled.
To summarize, uniform parsing procedures applied to uniform grammars are less than adequate for parsing ungrammatical input. As our experience with such an approach shows, the uniform methods are unable to take full advantage of domain knowledge, differing structural roles (e.g., case markers and case fillers), and relative ease of identification among the various constituents in different types of constructions. Instead, we advocate integrating a number of different parsing strategies tailored to each type of construction as dictated by the application domain. The parser should dynamically select parsing strategies according to what type of construction it expects in the course of the parse. We described a simple algorithm designed along these lines that makes dynamic choices between two parsing strategies, one designed for case constructions and the other for linear patterns.
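One way to picture the semantic scoping rule for conjunctions just described is the small Python sketch below; the type predicates are invented stand-ins for the domain-specific case-filler information and do not reflect the implemented system:

# Toy sketch of semantic conjunction scoping: attach the right-hand conjunct to
# the most deeply embedded open case whose semantic constraint it satisfies.
# The predicates below are invented stand-ins for domain case-filler knowledge.

def is_person(x):
    return x in {"Smith", "Jones"}

def is_fileset(x):
    return "files" in x or "programs" in x

# Open cases, innermost last: the object of "transfer" and the author of "programs".
open_cases = [
    ("transfer.object", is_fileset),
    ("program.author", is_person),
]

def scope_conjunct(rhs):
    for name, accepts in reversed(open_cases):   # innermost context first
        if accepts(rhs):
            return name
    return "new top-level utterance"

print(scope_conjunct("Jones"))                  # -> program.author
print(scope_conjunct("the census data files"))  # -> transfer.object
print(scope_conjunct("delete"))                 # -> new top-level utterance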
While this dynamic selection coproach was suggested by the needs of flexible parSing, it also seemed to give our trial implementation significant efficiency advantages over single-strategy approaches for grammatical input. | null | As part of our investigations in tosk-oriented parsing, we have implemented (in edditio,n to FlexP) a pure case-frame parser exploiting domain-specific case constraints stored in a declarative data structure, and a combination pattern-match, semantic grammar, canonicaltransform parser, All three parsers have exhibited a measure of success, but more interestingly, the strengths of one method appear to overlap with the weaknesses of a different method. Hence, we are working towards a single parser that dynamically selects its parsing strategy to suit the task demands.Our new parser is designed primarily for task domains where the prevalent forms of user input are commands and queries, both expressed in imperative or pseudo-imperative constructs. Since in imperative constructs the initial word (or phrase), establishes the case.frame for the entire utterance, we chose the case-frame parsing strategy as priman/. In order to recognize an imperative command, and to instantiate each case, other parsing strategies are invoked. Since the parser knows what can fill.a particular case, it can choosethe parsing strategy best suited for linguistic constructions expressing that type of information. Moreover, it can pass any global constraints from the case frame or from other instantiated cases to the subsidiary parsers . thus reducing potential ambiguity, speeding the parse, and enhancing robustness.Consider our multi-strategy parsing algorithm as described below. Input is assumed to be in the imperative form: MATCH system descriOecl above." If no match occurs, assume the input corresponds to the unmarked case (or the first unmarked case, if more than one is present), and proceed to the next step.relax the pstlern matching procedures involved.15. If this still does not account for all the input, interact with the user by asking cluestions focussed on the uninterprsted Dart of the input. The same focussed interaction techniclue (discussed in [7] ) is used to resolve semantic ambiguities in the inpuL 16. If user intersction proves impractical, apply the project-andintegrate method [3] to narrow down the meanings of unknown words by exploiting syntactic, semantic and contextual cues.These flexible paring steps rely on the construction-specific 8SDe¢~ of the basic algorithm, and would not be easy to emulate in either a syntactic ATN parser or one based on a gum semantic gnlmmer.A further advantage of our rnixed.stnl~ approach is that the top. level case structure, in es~mce, partitions the semantic world dynamically into categories according to the semanbc constraints On the active case fillers. Thus, when a pattern matcfler is invoked to parle the recipient case of a file-transfer case frlmle, it need Only consider I::~terns (and semantc.gramrnm" constructs) that correspond to logical locations insole a computer. This form Of eXl~"ts~n-drMm I~u~ing in restricted domains adds a two-fold effect to its rcbusmes¢• Many smmous parses are .ever generatod (bemnmo patterns yielding petentisfly spurious matches are never in inappropriate contexts,)• Additional knowledge (such as additional ~ grammar rules, etc.) can be added without a corresponding linear inc~ in parso time since the coes.frames focus only upon the relevant sul3sat of patterns and rules. Th. 
Thus, the efficiency of the system may actually increase with the addition of more domain knowledge (in effect sharpening the case frames to further restrict context). This behavior makes it possible to incrementally build the parser without the ever-present fear that a new extension may make the entire parser fail due to an unexpected application of that extension in the wrong context.
In closing, we note that the algorithm presented above does not mention interaction with morphological decomposition or spelling correction. Lexical processing is particularly important for robust parsing; indeed, based on our limited experience, lexical-level errors are a significant source of deviant input. The recognition and handling of lexical-deviation phenomena, such as abbreviations and misspellings, must be integrated with the more usual morphological analysis. Some of these topics are discussed independently in [6]. However, integrating resilient morphological analysis with the algorithm we have outlined is a problem we consider very important and urgent if we are to construct a practical flexible parser.
introduction:
When people use language spontaneously, they often do not respect grammatical niceties. Instead of producing sequences of grammatically well-formed and complete sentences, they often miss out or repeat words or phrases, break off what they are saying and rephrase or replace it, speak in fragments, or use otherwise incorrect grammar. While other people generally have little trouble comprehending ungrammatical utterances, most natural language computer systems are unable to process errorful input at all. Such inflexibility in parsing is a serious impediment to the use of natural language in interactive computer systems.
Accordingly, we [6] and other researchers including Weischedel and Black [14], and Kwasny and Sondheimer [9], have attempted to produce flexible parsers, i.e. parsers that can accept ungrammatical input, correcting the errors when possible, and generating several alternative interpretations if appropriate.
While different in many ways, all these approaches to flexible parsing operate by applying a uniform parsing process to a uniformly represented grammar. Because of the linguistic performance problems involved, this uniform procedure cannot be as simple and elegant as the procedures followed by parsers based on a pure linguistic competence model, such as Parsifal [10]. Indeed, their parsing procedures may involve several strategies that are applied in a predetermined order when the input deviates from the grammar, but the choice of strategy never depends on the specific type of construction being parsed. In light of experience with our own flexible parser, we have come to believe that such uniformity is not conducive to good flexible parsing. Rather, the strategies used should be dynamically selected according to the type of construction being parsed. For instance, partial linear pattern matching may be well suited to the flexible parsing of idiomatic phrases, or specialized noun phrases such as names, dates, or addresses (see also [5]), but case constructions, such as noun phrases with trailing prepositional phrases, or imperative phrases, require case-oriented parsing strategies. The underlying principle is simple: the appropriate knowledge must be brought to bear at the right time -- and it must not interfere at other times. Though the initial motivation for this approach sprang from the needs of flexible parsing, such construction-specific techniques can provide important benefits even when no grammatical deviations are encountered, as we will show. This observation may be related to the current absence of any single universal parsing strategy capable of exploiting all knowledge sources (although ELI [12] and its offspring [2] are efforts in this direction).
Our objective here is not to create the ultimate parser, but to build a very flexible and robust task-oriented parser capable of exploiting all relevant domain knowledge as well as more general syntax and semantics. The initial application domain for the parser is the central component of an interface to various computer subsystems (or tools).
This interface, and therefore the parser, should be adaptable to new tools by substituting domain-specific data bases (called "tool descriptions") that govern the behavior of the interface, including the invocation of parsing strategies, dictionaries and concepts, rather than requiring any domain adaptations by the interface system itself.
With these goals in mind, we proceed to give details of the kinds of difficulties that a uniform parsing strategy can lead to, and show how dynamically-selected construction-specific techniques can help. We list a number of such specific strategies, then we focus on our initial implementation of two of these strategies and the mechanism that dynamically selects between them while parsing task-oriented natural language imperative constructions. Imperatives were chosen largely because commands and queries given to a task-oriented natural language front end often take that form [6].
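To make the idea of dynamically selected, construction-specific strategies more concrete, the following small Python sketch shows one way a dispatcher could choose a strategy from the type of construction expected next. The strategy names and the registry are illustrative assumptions for this note, not FlexP's or the tool descriptions' actual interfaces:

# Illustrative sketch of strategy dispatch; the strategy registry and the
# construction categories are hypothetical, not FlexP's real interfaces.

def parse_case_frame(tokens):
    return ("case-frame parse of", tokens)

def parse_linear_pattern(tokens):
    return ("linear-pattern parse of", tokens)

def parse_proper_name(tokens):
    return ("name parse of", tokens)

# A "tool description" could supply a table like this, mapping the type of
# construction expected next to the strategy best suited to it.
STRATEGIES = {
    "imperative": parse_case_frame,
    "idiom": parse_linear_pattern,
    "person-name": parse_proper_name,
}

def parse(tokens, expected_type):
    strategy = STRATEGIES.get(expected_type, parse_linear_pattern)
    return strategy(tokens)

print(parse("transfer update.for to the accounts directory".split(), "imperative"))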
problems with a uniform parsing strategy:
Our present flexible parser, which we call FlexP, is intended to parse correctly input that corresponds to a fixed grammar, and also to deal with input that deviates from that grammar by erring along certain classes of common ungrammaticalities. Because of these goals, the parser is based on the combination of two uniform parsing strategies: bottom-up parsing and pattern-matching. The choice of a bottom-up rather than a top-down strategy was based on our need to recognize isolated sentence fragments, rather than complete sentences, and to detect restarts and continuations after interjections. However, since completely bottom-up strategies lead to the consideration of an unnecessary number of alternatives in correct input, the algorithm used allowed some of the economies of top-down parsing for non-deviant input. Technically speaking, this made the parser left-corner rather than bottom-up. We chose to use a grammar of linear patterns rather than, say, a transition network because pattern-matching meshes well with bottom-up parsing by allowing lookup of a pattern from the presence in the input of any of its constituents; because pattern-matching facilitates recognition of utterances with omissions and substitutions when patterns are recognized on the basis of partial matches; and because pattern-matching is necessary for the recognition of idiomatic phrases. More details of the justifications for these choices can be found in [6]. FlexP has been tested extensively in conjunction with a gracefully interacting interface to an electronic mail system [1]. "Gracefully interacting" means that the interface appears friendly, supportive, and robust to its user. In particular, graceful interaction requires the system to tolerate minor input errors and typos, so a flexible parser is an important component of such an interface. While FlexP performed this task adequately, the experience turned up some problems related to the major theme of this paper. These problems are all derived from the incompatibility between the uniform nature of the grammar representation and the kinds of flexible parsing strategies required to deal with the inherently non-uniform nature of some language constructions. In particular:
• Different elements in the pattern of a single grammar rule can serve radically different functions and/or exhibit different ease of recognition. Hence, an efficient parsing strategy should react to their apparent absence, for instance, in quite different ways.
• The representation of a single unified construction at the language level may require several linear patterns at the grammar level, making it impossible to treat that construction with the integrity required for adequate flexible parsing.
The second problem is directly related to the use of a pattern-matching grammar, but the first would arise with any uniformly represented grammar applied by a uniform parsing strategy. For our application, these problems manifested themselves most markedly by the presence of case constructions in the input language. Thus, our examples and solution methods will be in terms of integrating case-frame instantiation with other parsing strategies. Consider, for example, the following noun phrase with a typical postnominal case frame: "the messages from Smith about ADA pragmas dated later than Saturday". The phrase has three cases marked by "from", "about", and "dated later than".
This type of phrase is actually used in FlexP's current grammar, and the basic pattern used to recognize descriptions of messages is:
<?determiner *MessageAdj MessageHead *MessageCase>
which says that a message description is an optional (?) determiner, followed by an arbitrary number (*) of message adjectives, followed by a message head word (i.e. a word meaning "message"), followed by an arbitrary number of message cases. In the example, "the" is the determiner, there are no message adjectives, "messages" is the message head word, and there are three message cases: "from Smith", "about ADA pragmas", and "dated later than". Because each case has more than one component, each must be recognized by a separate pattern:
<%from Person> <%about Subject> <%since Date>
Here % means anything in the same word class; "dated later than", for instance, is equivalent to "since" for this purpose. These patterns for message descriptions illustrate the two problems mentioned above: the elements of the case patterns have radically different functions -- the first elements are case markers, and the second elements are the actual subconcepts for the case. Since case indicators are typically much more restricted in expression, and therefore much easier to recognize, than their corresponding subconcepts, a plausible strategy for a parser that "knows" about case constructions is to scan input for the case indicators, and then parse the associated subconcepts top-down. This strategy is particularly valuable if one of the subconcepts is malformed or of uncertain form, such as the subject case in our example. Neither "ADA" nor "pragmas" is likely to be in the vocabulary of our system, so the only way the end of the subject field can be detected is by the presence of the case indicator ("dated later than") which follows it. However, the present parser cannot distinguish case indicators from case fillers -- both are just elements in a pattern with exactly the same computational status, and hence it cannot use this strategy.
The next section describes an algorithm for flexibly parsing case constructions. At the moment, the algorithm works only on a mixture of case constructions and linear patterns, but eventually we envisage a number of specific parsing algorithms, one for each of a number of construction types, all working together to provide a more complete flexible parser.
Below, we list a number of the parsing strategies that we envisage might be used. Most of these strategies exploit the constrained task-oriented nature of the input language:
• Case-Frame Instantiation is necessary to parse general imperative constructs and noun phrases with postnominal modifiers. This method has been applied before with some success to linguistic or conceptual cases [12] in more general parsing tasks. However, it becomes much more powerful and robust if domain-dependent constraints among the cases can be exploited. For instance, in a file-management system, the command "Transfer UPDATE.FOR to the accounts directory" can be easily parsed if the information in the unmarked case of transfer ("update.for" in our example) is parsed by a file-name expert, and the destination case (flagged by "to") is parsed not as a physical location, but a logical entity inside a machine.
The latter constraint enables one to interpret "directory" not as a phonebook or bureaucratic agency, but as a reasonable destination for a file in a computer.
• Semantic Grammars [8] prove useful when there are ways of hierarchically clustering domain concepts into functionally useful categories for user interaction. Semantic grammars, like case systems, can bring domain knowledge to bear in disambiguating word meanings. However, the central problem of semantic grammars is non-transferability to other domains, stemming from the specificity of the semantic categorization hierarchy built into the grammar rules. This problem is somewhat ameliorated if this technique is applied only to parsing selected individual phrases [13], rather than being responsible for the entire parse. Individual constituents, such as those recognizing the initial segment of factual queries, apply in many domains, whereas a constituent recognizing a clause about file transfer is totally domain specific. Of course, this restriction calls for a different parsing strategy at the clause and sentence level.
• (Partial) Pattern Matching on strings, using non-terminal semantic-grammar constituents in the patterns, proves to be an interesting generalization of semantic grammars. This method is particularly useful when the patterns and semantic grammar non-terminal nodes interleave in a hierarchical fashion.
• Transformations to Canonical Form prove useful both for domain-dependent and domain-independent constructs. For instance, the following rule transforms possessives into "of" phrases, which we chose as canonical:
[<ATTRIBUTE> in possessive form, <VALUE> legitimate for attribute] -> [<VALUE> "OF" <ATTRIBUTE> in simple form]
Hence, the parser need only consider "of" constructions ("file's destination" => "destination of file"). These transforms simplify the pattern matcher and semantic grammar application process, especially when transformed constructions occur in many different contexts.
• Target-specific methods may be invoked for portions of sentences not easily handled by the more general methods. For instance, if a case-grammar determines that the case just signaled is a proper name, a special name-expert strategy may be called. This expert knows that names can contain unknown words (e.g., Mr. Joe Gallen D'Aguila is obviously a name with D'Aguila as the surname) but subject to ordering constraints and morphological preferences. When unknown words are encountered in other positions in a sentence, the parser may try morphological decomposition, spelling correction, querying the user, or more complex processes to induce the probable meaning of unknown words, such as the project-and-integrate technique described in [3]. Clearly these unknown-word strategies ought to be suppressed in parsing person names.
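As an informal illustration of why case markers are so much easier to exploit than case fillers, here is a minimal Python sketch of the message-description example above. The marker spellings, case names and filler handling are simplified assumptions made only for this sketch, not FlexP's actual data structures:

# Sketch: recognize "the messages from Smith about ADA pragmas dated later than Saturday"
# by scanning for case markers and taking the text up to the next marker as the filler.
# Marker spellings and case names are assumptions for this example only.

CASE_MARKERS = {
    "from": "Person",
    "about": "Subject",
    "since": "Date",
    "dated later than": "Date",   # treated as equivalent to "since"
}

def find_marker(words, i):
    """Return (marker, length) if a case marker starts at position i."""
    for marker in sorted(CASE_MARKERS, key=lambda m: -len(m.split())):
        parts = marker.split()
        if words[i:i + len(parts)] == parts:
            return marker, len(parts)
    return None, 0

def parse_message_description(words):
    cases, i = {}, 0
    # Skip the determiner, adjectives and head noun up to the first marker.
    while i < len(words) and find_marker(words, i)[0] is None:
        i += 1
    while i < len(words):
        marker, n = find_marker(words, i)
        i += n
        filler = []
        while i < len(words) and find_marker(words, i)[0] is None:
            filler.append(words[i])   # unknown words are fine: the next marker ends the filler
            i += 1
        cases[CASE_MARKERS[marker]] = " ".join(filler)
    return cases

print(parse_message_description(
    "the messages from Smith about ADA pragmas dated later than Saturday".split()))
# -> {'Person': 'Smith', 'Subject': 'ADA pragmas', 'Date': 'Saturday'}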
a case-oriented parsing strategy:
As part of our investigations in task-oriented parsing, we have implemented (in addition to FlexP) a pure case-frame parser exploiting domain-specific case constraints stored in a declarative data structure, and a combination pattern-match, semantic-grammar, canonical-transform parser. All three parsers have exhibited a measure of success, but more interestingly, the strengths of one method appear to overlap with the weaknesses of a different method. Hence, we are working towards a single parser that dynamically selects its parsing strategy to suit the task demands.
Our new parser is designed primarily for task domains where the prevalent forms of user input are commands and queries, both expressed in imperative or pseudo-imperative constructs. Since in imperative constructs the initial word (or phrase) establishes the case frame for the entire utterance, we chose the case-frame parsing strategy as primary. In order to recognize an imperative command, and to instantiate each case, other parsing strategies are invoked. Since the parser knows what can fill a particular case, it can choose the parsing strategy best suited for linguistic constructions expressing that type of information. Moreover, it can pass any global constraints from the case frame or from other instantiated cases to the subsidiary parsers, thus reducing potential ambiguity, speeding the parse, and enhancing robustness.
Consider our multi-strategy parsing algorithm as described below. Input is assumed to be in the imperative form: MATCH system described above. If no match occurs, assume the input corresponds to the unmarked case (or the first unmarked case, if more than one is present), and proceed to the next step.
apply the parsing strategy indicated by the type of construct expected as a case filler, passing any available case constraints:
to the sub-parser. A partial list of parsing strategies indicated by expected fillers is:
• Sub-imperative -- Case-frame parser, starting with the command-identification pattern match above.
• Structured-object (e.g., a concept with subattributes) -- Case-frame parser, starting with the pattern-matcher invoked on the list of patterns corresponding to the names (or compound names) of the semantically permissible structured objects, followed by case-frame parsing of any present subattributes.
• Simple Object -- Apply the pattern matcher, using only the patterns indexed as relevant in the case-filler-information field.
• Special Object -- Apply the parsing strategy applicable to that type of special object (e.g., proper names, dates, quoted strings, stylized technical jargon, etc.).
• None of the above -- (Errorful input or parser deficiency) Apply the graceful recovery techniques discussed below.
(Completed frames typically reside at the top of the stack.)
8. If there is more than one case frame on the stack when trying to parse additional input, apply the following procedure:
• If the input only matches a case marker in one frame, proceed to instantiate the corresponding case-filler as outlined above. Also, if the matched case marker is not on the most embedded case frame (i.e., at the top of the context stack), pop the stack until the frame whose case marker was matched appears at the top of the stack.
• If no case markers are matched, attempt to parse unmarked cases, starting with the most deeply embedded case frame (the top of the context stack) and proceeding outwards. If one is matched, pop the context stack until the corresponding case frame is at the top. Then, instantiate the case filler, remove the case from the active case frame, and proceed to parse additional input. If more than one unmarked case matches the input, choose the most embedded one (i.e., the most recent context) and save the state of the parse on the global history stack. (This suggests an ambiguity that cannot be resolved with the information at hand.)
• If the input matches more than one case marker in the context stack, try to parse the case filler via the indexed parsing strategy for each filler-information slot corresponding to a matched case marker. If more than one case filler parses (a somewhat rare situation indicating underconstrained case frames or truly ambiguous input), save the state in the global history stack and pursue the parse assuming the most deeply embedded constituent. [Our case-frame attachment heuristic favors the most local attachment permitted by semantic case constraints.]
9. If a conjunction or disjunction occurs in the input, cycle through the context stack trying to parse the right-hand side of the conjunction as filling the same case as the left-hand side. If no such parse is feasible, interpret the conjunction as top-level, e.g., as two instances of the same imperative, or two different imperatives. If more than one parse results, interact with the user to disambiguate. To illustrate this simple process, consider:
"Transfer the programs written by Smith and Jones to ..."
"Transfer the programs written in Fortran and the census data files to ..."
"Transfer the programs written in Fortran and delete ..."
The scope of the first conjunction is the "author" subattribute of program, whereas the scope of the second conjunction is the unmarked "object" case of the transfer action.
Domain knowledge in the case-filler information of the "object" case in the "transfer" imperative inhibits "Jones" from matching a potential object for electronic file transfer. Similarly, "census data files" are inhibited from matching the "author" subattribute of a program. Thus conjunctions in the two syntactically comparable examples are scoped differently by our semantic-scoping rule relying on domain-specific case information. "Delete" matches no active case filler, and hence it is parsed as the initial segment of a second conjoined utterance. Since "delete" is a known imperative, this parse succeeds.
10. [7].
The need for embedded case structures and ambiguity resolution based on domain-dependent semantic expectations of the case fillers is illustrated by the following pair of sentences:
"Edit the programs in Fortran"
"Edit the programs in Teco"
"Fortran" fills the language attribute of "program", but cannot fill either the location or instrument case of Edit (both of which can be signalled by "in"). In the second sentence, however, "Teco" fills the instrument case of the verb "edit" and none of the attributes of "program". This disambiguation is significant because in the first example the user specified which programs (s)he wants to edit, whereas in the second example (s)he specified how (s)he wants to edit them.
The algorithm presented is sufficient to parse grammatical input. In addition, since it operates in a manner specifically tailored to case constructions, it is easy to add modifications dealing with deviant input. Currently, the algorithm includes the following steps that deal with ungrammaticality:
12. If step 4 fails, i.e. a filler of appropriate type cannot be parsed at that position in the input, then repeat step 3 at successive points in the input until it produces a match, and continue the regular algorithm from there. Save all words not matched on a SKIPPED list. This step takes advantage of the fact that case markers are often much easier to recognize than case fillers to realign the parser if it gets out of step with the input (because of unexpected interjections, or other spurious or missing words).
13. If words are on SKIPPED at the end of the parse, and cases remain unfilled in the case frames that were on the context stack at the time the words were skipped, then try to parse each of the case fillers against successive positions of the skipped sequences. This step picks up cases for which the marker was incorrect or garbled.
To summarize, uniform parsing procedures applied to uniform grammars are less than adequate for parsing ungrammatical input. As our experience with such an approach shows, the uniform methods are unable to take full advantage of domain knowledge, differing structural roles (e.g., case markers and case fillers), and relative ease of identification among the various constituents in different types of constructions. Instead, we advocate integrating a number of different parsing strategies tailored to each type of construction as dictated by the application domain. The parser should dynamically select parsing strategies according to what type of construction it expects in the course of the parse. We described a simple algorithm designed along these lines that makes dynamic choices between two parsing strategies, one designed for case constructions and the other for linear patterns.
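A minimal Python sketch of the context-stack discipline described in steps 8 and 9 might look as follows; the frame representation and the marker matching are simplified assumptions for illustration, not the implemented parser:

# Sketch of the context stack used when several case frames are open at once.
# A frame here is just a dict: its name, its case markers, and filled cases.
# This is a toy illustration of steps 8-9, not the actual implementation.

def match_marker(stack, word):
    """Find the topmost frame with an unfilled case signalled by `word`."""
    for depth in range(len(stack) - 1, -1, -1):        # top of stack first
        frame = stack[depth]
        for case, marker in frame["markers"].items():
            if word == marker and case not in frame["filled"]:
                return depth, case
    return None, None

def consume(stack, word, filler):
    depth, case = match_marker(stack, word)
    if case is None:
        return False
    del stack[depth + 1:]            # pop embedded frames above the matched one
    stack[depth]["filled"][case] = filler
    return True

# "Transfer <file> to <destination>": the transfer frame stays on the stack
# while an embedded frame for the file description is being instantiated.
transfer = {"name": "transfer", "markers": {"destination": "to"}, "filled": {}}
message = {"name": "message", "markers": {"sender": "from"}, "filled": {}}
stack = [transfer, message]

consume(stack, "from", "Smith")               # fills the embedded message frame
print(message["filled"])                      # -> {'sender': 'Smith'}
consume(stack, "to", "accounts directory")    # matches the outer frame, pops the inner one
print(transfer["filled"])                     # -> {'destination': 'accounts directory'}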
While this dynamic selection approach was suggested by the needs of flexible parsing, it also seemed to give our trial implementation significant efficiency advantages over single-strategy approaches for grammatical input.
if words are still on SKIPPED, attempt the same matches, but:
relax the pattern matching procedures involved.
15. If this still does not account for all the input, interact with the user by asking questions focussed on the uninterpreted part of the input. The same focussed interaction technique (discussed in [7]) is used to resolve semantic ambiguities in the input.
16. If user interaction proves impractical, apply the project-and-integrate method [3] to narrow down the meanings of unknown words by exploiting syntactic, semantic and contextual cues.
These flexible parsing steps rely on the construction-specific aspects of the basic algorithm, and would not be easy to emulate in either a syntactic ATN parser or one based on a pure semantic grammar.
A further advantage of our mixed-strategy approach is that the top-level case structure, in essence, partitions the semantic world dynamically into categories according to the semantic constraints on the active case fillers. Thus, when a pattern matcher is invoked to parse the recipient case of a file-transfer case frame, it need only consider patterns (and semantic-grammar constructs) that correspond to logical locations inside a computer. This form of expectation-driven parsing in restricted domains adds a two-fold effect to its robustness:
• Many spurious parses are never generated (because patterns yielding potentially spurious matches are never applied in inappropriate contexts).
• Additional knowledge (such as additional semantic grammar rules, etc.) can be added without a corresponding linear increase in parse time, since the case-frames focus only upon the relevant subset of patterns and rules. Thus, the efficiency of the system may actually increase with the addition of more domain knowledge (in effect sharpening the case frames to further restrict context). This behavior makes it possible to incrementally build the parser without the ever-present fear that a new extension may make the entire parser fail due to an unexpected application of that extension in the wrong context.
In closing, we note that the algorithm presented above does not mention interaction with morphological decomposition or spelling correction. Lexical processing is particularly important for robust parsing; indeed, based on our limited experience, lexical-level errors are a significant source of deviant input. The recognition and handling of lexical-deviation phenomena, such as abbreviations and misspellings, must be integrated with the more usual morphological analysis. Some of these topics are discussed independently in [6]. However, integrating resilient morphological analysis with the algorithm we have outlined is a problem we consider very important and urgent if we are to construct a practical flexible parser.
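As a toy rendering of the skip-and-retry recovery idea in steps 12 to 14 (skip unparsable words, realign on the next case marker, and later retry the skipped material against unfilled cases), consider the following Python sketch; the markers and fillers are invented for the example:

# Toy sketch of steps 12-14: when a filler cannot be parsed, skip forward to
# the next case marker, remember the skipped words, and retry them afterwards
# against any cases that remain unfilled.  Purely illustrative.

MARKERS = {"from": "sender", "to": "recipient"}
KNOWN_FILLERS = {"Smith": "sender", "Jones": "sender", "accounts": "recipient"}

def parse_with_skipping(words):
    filled, skipped = {}, []
    expecting = None                      # case whose filler we are looking for
    for w in words:
        if w in MARKERS:
            expecting = MARKERS[w]
        elif expecting and KNOWN_FILLERS.get(w) == expecting:
            filled[expecting] = w
            expecting = None
        else:
            skipped.append(w)             # step 12: realign, keep the word aside
    # Steps 13-14: retry skipped words against still-unfilled cases.
    for w in skipped:
        case = KNOWN_FILLERS.get(w)
        if case and case not in filled:
            filled[case] = w
    return filled, skipped

print(parse_with_skipping("please from uh Smith accounts".split()))
# -> ({'sender': 'Smith', 'recipient': 'accounts'}, ['please', 'uh', 'accounts'])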
Appendix:
| null | null | null | null | {
"paperhash": [
"carbonell|delta-min:_a_search-control_method_for_information-gathering_problems",
"hayes|flexible_parsing",
"marcus|a_theory_of_syntactic_recognition_for_natural_language",
"carbonell|towards_a_self-extending_parser",
"kwasny|ungrammaticality_and_extra-grammaticality_in_natural_language_understanding_systems",
"hendrix|developing_a_natural_language_interface_to_complex_data",
"waltz|writing_a_natural_language_data_base_system",
"riesbeck|comprehension_by_computer_:_expectation-based_analysis_of_sentences_in_context",
"gershman|knowledge-based_parsing."
],
"title": [
"DELTA-MIN: A Search-Control Method for Information-Gathering Problems",
"Flexible Parsing",
"A theory of syntactic recognition for natural language",
"Towards a Self-Extending Parser",
"Ungrammaticality and Extra-Grammaticality in Natural Language Understanding Systems",
"Developing a natural language interface to complex data",
"Writing a Natural Language Data Base System",
"Comprehension by computer : expectation-based analysis of sentences in context",
"Knowledge-based parsing."
],
"abstract": [
"The Δ-MIN method consists of a best-first backtracking algorithm applicable to a large class of information-gathering problems, such as most natural language analyzers, many speech understanding systems, and some forms of planning and automated knowledge acquisition. This paper focuses on the general Δ-MIN search-control method and characterizes the problem spaces to which it may apply. Essentially, Δ-MIN provides a best-first search mechanism over the space of alternate interpretations of an input sequence, where the interpreter is assumed to be organized as a set of cooperating expert modules.",
"When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking-off and restarting, speaking in fragments, etc., Their human listeners are usually able to cope with these deviations with little difficulty. If a computer system wishes to accept natural language input from its users on a routine basis, it must display a similar indifference. In this paper, we outline a set of parsing flexibilities that such a system should provide. We go on to describe FlexP. a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system.",
"Abstract : Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this 'determinism' hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter. (Author)",
"This paper discusses an approach to incremental learning in natural language processing. The technique of projecting and integrating semantic constraints to learn word definitions is analyzed as implemented in the POLITICS system. Extensions and improvements of this technique are developed. The problem of generalizing existing word meanings and understanding metaphorical uses of words is addressed in terms of semantic constraint integration.",
"Among the components included in Natural Language Understanding (NLU) systems is a grammar which spec i f i es much o f the l i n g u i s t i c s t ruc tu re o f the ut terances tha t can be expected. However, i t is ce r ta in tha t inputs that are ill-formed with respect to the grammar will be received, both because people regularly form ungra=cmatical utterances and because there are a variety of forms that cannot be readily included in current grammatical models and are hence \"extra-grammatical\". These might be rejected, but as Wilks stresses, \"...understanding requires, at the very least, ... some attempt to interpret, rather than merely reject, what seem to be ill-formed utterances.\" [WIL76]",
"Aspects of an intelligent interface that provides natural language access to a large body of data distributed over a computer network are described. The overall system architecture is presented, showing how a user is buffered from the actual database management systems (DBMSs) by three layers of insulating components. These layers operate in series to convert natural language queries into calls to DBMSs at remote sites. Attention is then focused on the first of the insulating components, the natural language system. A pragmatic approach to language access that has proved useful for building interfaces to databases is described and illustrated by examples. Special language features that increase system usability, such as spelling correction, processing of incomplete inputs, and run-time system personalization, are also discussed. The language system is contrasted with other work in applied natural language processing, and the system's limitations are analyzed.",
"We present a model for processing English requests for information from a relational data base. The model has as its main steps (a) locating semantic constituents of a request; (b) matching these constituents against larger templates called concept case frames; (c) filling in the concept case frame using information from the user's request, from the dialogue context and from the user's responses to questions posed by the system; and (d) generating a formal data base query using the collected information. Methods are suggested for constructing the components of such a natural language processing system for an arbitrary relational data base. The model has been applied to a large data base of aircraft flight and maintenance data to generate a system called PLANES; examples are drawn from this system.",
"Abstract : ELI (English Language Interpreter) is a natural language parsing program currently used by several story understanding systems. ELI differs from most other parsers in that it: produces meaning representations (using Schank's Conceptual Dependency system) rather than syntactic structures; uses syntactic information only when the meaning can not be obtained directly; talks to other programs that make high level inferences that tie individual events into coherent episodes; uses context-based exceptions (conceptual and syntactic) to control its parsing routines. Examples of texts that ELI has understood, and details of how it works are given.",
"Abstract : A model for knowledge-based natural language analysis is described. The model is applied to parsing English into Conceptual Dependency representations. The model processes sentences from left to right, one word at a time, using linguistic and non-linguistic knowledge to find the meaning of the input. It operates in three modes: structure-driven, position-driven, and situation-driven. The first two modes are expectation-based. In structure driven mode concepts underlying new input are expected to fill slots in the previously built conceptual structures. Noun groups are handled in position-driven mode which uses position-based pooling of expectations. When the first two modes fail to account for a new input, the parser goes into the third, situation-driven mode which tries to handle a situation by applying a series of appropriate experts. Four general kinds of knowledge are identified as necessary for language understanding: lexical knowledge, world knowledge, linguistic knowledge, and contextual knowledge."
],
"authors": [
{
"name": [
"J. Carbonell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Hayes",
"G. Mouradian"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Mitchell P. Marcus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Carbonell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Kwasny",
"N. Sondheimer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Hendrix",
"E. Sacerdoti",
"Daniel Sagalowicz",
"Jonathan Slocum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Waltz",
"Bradley A. Goodman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Riesbeck",
"R. Schank"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Gershman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"19028286",
"11007680",
"6616065",
"16742497",
"12695499",
"15391397",
"2983985",
"60546035",
"60724649"
],
"intents": [
[],
[],
[],
[
"background",
"methodology"
],
[],
[
"background"
],
[],
[
"background",
"methodology"
],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | Problem: The paper addresses the need for robust natural language interpretation that can handle ungrammatical input and deviations from standard grammar rules in order to improve the performance of natural language computer systems.
Solution: The paper proposes a multi-strategy parsing approach that integrates different parsing strategies, with a focus on case-frame instantiation, to process conjunctions, fragmentary input, and ungrammatical structures effectively. This approach aims to bring task-specific domain knowledge to bear on both grammatical and ungrammatical input, providing flexibility, redundancy, and improved parsing capabilities. | 524 | 0.091603 | null | null | null | null | null | null | null | null |
8a93da3c69a01aec163a04162b353ccd5eff54e5 | 2608991 | null | The evolution of machine translation systems | The development of MT system design is described in four periods: the early experimental period , the period of large-scale research on 'direct translation' systems , the period after the ALPAC report in which the 'interlingual' and 'transfer' approaches were developed , and the current period in which interactive systems and 'artificial intelligence' approaches have appeared together with proposals for the multilingual system EUROTRA (since 1975). | {
"name": [
"Hutchins, W. John"
],
"affiliation": [
null
]
} | null | null | Translating and the Computer: Practical experience of machine translation | 1981-11-01 | 53 | 14 | null | The development of MT system design is described in four periods: the early experimental period , the period of large-scale research on 'direct translation' systems , the period after the ALPAC report in which the 'interlingual' and 'transfer' approaches were developed , and the current period in which interactive systems and 'artificial intelligence' approaches have appeared together with proposals for the multilingual system EUROTRA (since 1975).The evolution of machine translation has been influenced by many factors during a quarter century of research and development. In the early years the limitations of computer hardware and the inadequacies of programming languages were crucial elements, and they cannot be said to be trivial even now. Political and economic forces have influenced decisions about the languages to be translated from, the source languages as they are commonly called, and the languages to be translated into, the target languages. In the 1950's and 1960's concern in the United States about Soviet advances in science and technology encouraged massive funding of experimental Russian-English systems. Today the bicultural policy of Canada justifies support of English-French translation systems and the multilingual policy of the European Communities has led to sponsorship of research into a multilingual system. Other obviously important factors have been the intelligibility and readability of translations and the amount of 'post-editing' (or revising) considered necessary. This paper will concentrate, however, on the 'internal' evolution of machine translation, describing the various strategies or 'philosophies' which have been adopted at different times in the design of systems. It will be concerned only with systems producing fully translated texts; not, therefore, with systems providing aids for translators such as automatic dictionaries and terminology data banks. Only brief descriptions of major systems can be included -for fuller and more comprehensive treatments see Bruderer [l] and Hutchins [2] , where also more detailed bibliographies will be found; and for a fuller picture of the linguistic aspects of machine translation see Hutchins [3] . This account will also be restricted to systems in North America and Europe -for descriptions of research in the Soviet Union, which has evolved in much the same way, see Harper [4] , Locke [5] , Bar-Hillel [6] , Roberts and Zarechnak [7] , and other references in Hutchins [2] .Although there had been proposals for translation machines in the 1930's (see Zarechnak [8] for details), the real birth of machine translation came after the war with the general availability of the digital computer. From 1946 there were some simple experiments by Booth and Richens in Britain, mainly on automatic dictionaries, but it was the memorandum sent by Warren Weaver in 1949 [9] to some 200 of his acquaintances which launched machine translation as a scientific enterprise. Weaver had been impressed by the successful use of computers in breaking enemy codes during the war and suggested that translation could also be tackled as a decoding problem. He admitted that there were difficult semantic problems but mentioned the old idea of a 'universal language' as a possible intermediary between languages. Before long there were projects underway at many American universities. 
The early systems were invariably attempts to produce translations by taking the words of a text one at a time, looking them up in a bilingual dictionary, finding the equivalents in the target language and printing out the result in the same sequence as in the source text. If a word happened to have two or more possible translations, they were all printed. The method was obviously unsatisfactory and it was not long before attempts were made to rearrange the sequences of words, which meant that some kind of syntactic analysis was needed.In 1954 the research team at Georgetown University set up a public demonstration intended to show the technical feasibility of machine translation. With a vocabulary of just 250 Russian words, only six rules of grammar and a carefully selected sample of easy Russian sentences, the system demonstrated had no scientific value but, nevertheless, it encouraged the belief that translation by computer had been solved in principle and that the problems remaining were basically of an engineering nature [5, 8] . In the next ten years, research in the United States was supported on a massive scale -at 17 institutions to the tune of almost 20 million dollars, it has been estimated [7] -but the promised 'break-throughs' did not materialise, optimistic forecasts of commercial systems 'within five years' came to nothing, awareness of serious linguistic problems increased, and above all the translations produced were usually of very poor quality. In 1964 the National Science Foundation set up the Automatic Language Processing Advisory Committee (ALPAC) at the instigation of sponsors of machine translation. It reported in 1966 [10] that machine translation was slower, less accurate and twice as expensive as human translation and recommended no further investment. Research in the United States suffered immediate reductions and machine translation became no longer a 'respectable' scientific pursuit.Although the report was widely condemned as biased and shortsighted -see Locke [5] and Josselson [11] -its negative conclusions are not surprising when we look at the systems in operation or under development at the time. For example, the Mark II system installed in 1964 to produce Russian-English translations for the U.S. Air Force was only a slightly improved version of one of the earliest word-by-word systems (Kay [12] ). The translations required extensive 'post-editing' and were not rated highly.The general strategy employed in systems during this period until the mid-1960's was the 'direct translation' approach ( fig. 1 ): systems were designed in all details specifically for one pair of languages, nearly always, at this time, for Russian as the source language (SL) and English as the target language (TL). The basic assumption was that the vocabulary and syntax of SL texts should be analysed no more than necessary for the resolution of ambiguities, the identification of appropriate translations and the specification of the word order of TL texts. Syntactic analysis was designed to do little more than recognition of word classes (nouns, verbs, adjectives, etc.) in order to deal with homographs (e.g. control as verb or noun). Semantic analysis was rare, being restricted to the use of features such as 'male', 'concrete', 'liquid' etc. in cases where context could resolve ambiguities (e.g. 
foot cannot be 'animate' in the contexts foot of the hill and foot of the stairs.)A typical example is the Georgetown University system, which in fact proved to be one of the most successful using the 'direct' approach [12, 13] . In 1964 Russian-English systems were delivered to the U.S. Atomic Energy Commission and to Euratom in Italy; both were in regular operation until very recently. The Georgetown research team adopted what Garvin was later [14] to call the 'brute force' method of tackling problems: a program would be written for a particular text corpus, tested on another corpus, amended and improved, tested on a larger corpus, amended again, and so forth. The result was a monolithic program of intractable complexity, with no clear separation of those parts which analysed SL texts and those parts which produced TL texts. Syntactic analysis was rudimentary; there was no notion of grammatical rule or syntactic structure, even less of a 'theory' of language or translation. In addition, any information about the grammar of English or Russian which the program used was incorporated in the very structure of the program itself. Consequently modification of the system became progressively more and more difficult [12] . In fact, both the Georgetown systems remained unchanged after their installation in 1964.During this period linguistics had very little impact in practice on the design of machine translation systems. The tradition of Bloomfield which dominated American linguistics in the 1940's and 1950's concentrated on descriptive techniques and on problems of phonology and morphology; it had little interest in syntax or in semantics. Nevertheless, there were some researchers who developed methods of syntactic analysis based on explicit theoretical foundations. For example, Paul Garvin [14] developed his 'fulcrum' method which produced phrase structures indicating dependency relations between constituents, e.g. adjective to noun, noun to finite verb, noun to preposition (see fig. 2 ). The method was adopted in the Wayne State University project, which revealed its shortcomings; after ten years' work (1959) (1960) (1961) (1962) (1963) (1964) (1965) (1966) (1967) (1968) (1969) (1970) (1971) (1972) a very complex program was still unable to parse Russian sentences with more than one finite verb [15] . However, by this time Chomsky had already shown [16] why such syntactic models, in particular the equivalent and more familiar phrase structure version ( fig. 3 ), were in principle inadequate for the representation and description of the syntax of natural languages. Chomsky proposed the transformationalgenerative model which linked 'surface' phrase structures to 'deep' phrase structures by transformational rules.In a survey of machine translation in 1960 Bar-Hillel [6] did not doubt that methods of syntactic analysis could be greatly improved with the help of linguistic theory, but he expressed his conviction that semantic problems could never be completely resolved and that, therefore, high-quality translation by computer was impossible in principle.After the ALPAC report in 1966, research in machine translation continued for some time on a much reduced scale. Its goals had become more realistic; no longer were translations expected to be stylistically perfect, the aim was readability and fidelity to the original. 
On the other hand, there emerged a number of linguistically more advanced systems based on 'indirect' approaches to system design, and there was a welcome increase in the variety of source and target languages.

Research continued throughout on 'direct translation' systems. Two of them became fully operational systems during this period. The best known is SYSTRAN, designed initially as a Russian-English system and used in this form by the U.S. Air Force since 1970. Later it was adapted for English-French translation and this version was delivered in 1976 to the Commission of the European Communities. At various stages of development are further versions for French-English and English-Italian translation [17, 18]. SYSTRAN may be regarded as essentially a greatly improved descendant of the Georgetown 'direct translation' system. Linguistically there is little advance, but computationally the improvements are considerable. The main ones lie in the 'modularity' of its programming, allowing any part of the processes to be modified without the risk of impairing overall efficiency, and in the strict separation of linguistic data and computational processes. It is therefore able to avoid many of the irresolvable complexities of the monolithic Georgetown system. Its overall organisation is outlined in fig. 4. The Input program loads the text and the dictionaries, and checks each word against a High Frequency dictionary. Next the remaining words are sorted alphabetically and searched for in the Main Stem dictionary. Both dictionaries supply grammatical information, some semantic data and potential equivalents in the target language. The Analysis program makes seven 'passes' through each sentence: i) to resolve homographs, by examining the grammatical categories of adjacent words; ii) to look for compound nouns (e.g. blast furnace) in a Limited Semantics dictionary; iii) to identify phrase groups by looking for punctuation marks, conjunctions, relative pronouns, etc.; iv) to recognise primary syntactic relations such as congruence, government and apposition; v) to identify coordinate structures within phrases, e.g. conjoined adjectives or nouns modifying a noun; vi) to identify subjects and predicates; and vii) to recognise prepositional structures. The Transfer program has three parts: i) to look for words with idiomatic translations under certain conditions, e.g. agree if in the passive is translated as French convenir, otherwise as être d'accord; ii) to translate prepositions, using the semantic information assigned to words which govern them and which are governed by them; and iii) to resolve the remaining ambiguities, generally by tests specified in the dictionaries for particular words or expressions. The last stage, Synthesis, produces sentences in the target language from the equivalents indicated in the dictionaries, modifying verb forms and adjective endings as necessary, and finally rearranging the word order, e.g. changing an English adjective-noun sequence to a French noun-adjective sequence.

Like its Georgetown ancestor, SYSTRAN is still basically a 'direct translation' system: programs of analysis and synthesis are designed for specific pairs of languages. However, in the course of time it has acquired features of a 'transfer' system, as we shall see below, in that the stages of Analysis, Transfer and Synthesis are clearly separated.
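As a way of visualising that separation of stages, here is a minimal sketch of a SYSTRAN-like organisation in which the dictionaries are plain data and each stage is an independent pass over the sentence. The dictionary entries, the single homograph rule and the function names are invented for the illustration; they are not SYSTRAN's actual data, rules or code.

```python
# Sketch of a staged 'direct translation' pipeline in the SYSTRAN mould:
# dictionary lookup, an analysis pass, a transfer step and a synthesis step,
# with the linguistic data held in dictionaries rather than in the program
# text.  Everything below is invented toy data.

STEM_DICTIONARY = {
    "control": [{"class": "noun", "fr": "commande"}, {"class": "verb", "fr": "commander"}],
    "the":     [{"class": "det",  "fr": "le"}],
    "valve":   [{"class": "noun", "fr": "vanne"}],
}

def lookup(words):
    # Input stage: attach every dictionary reading to each word.
    return [{"word": w, "readings": STEM_DICTIONARY.get(w, [{"class": "unknown", "fr": w}])}
            for w in words]

def analysis_pass_homographs(sentence):
    # One analysis pass: keep the noun reading of a homograph after a
    # determiner, otherwise prefer the verb reading (a toy rule).
    for i, token in enumerate(sentence):
        if len(token["readings"]) > 1:
            prev_is_det = i > 0 and sentence[i - 1]["readings"][0]["class"] == "det"
            wanted = "noun" if prev_is_det else "verb"
            kept = [r for r in token["readings"] if r["class"] == wanted]
            token["readings"] = kept or token["readings"]
    return sentence

def transfer(sentence):
    # Transfer stage: take the target equivalent recorded in the dictionary.
    for token in sentence:
        token["fr"] = token["readings"][0]["fr"]
    return sentence

def synthesis(sentence):
    # Synthesis stage: emit the chosen equivalents; agreement and reordering
    # rules would be applied here in a fuller sketch.
    return " ".join(token["fr"] for token in sentence)

def translate(words):
    sentence = lookup(words)
    for analysis_pass in (analysis_pass_homographs,):  # further passes would be added here
        sentence = analysis_pass(sentence)
    return synthesis(transfer(sentence))

if __name__ == "__main__":
    # toy output only: no agreement or reordering is attempted
    print(translate(["control", "the", "valve"]))
```

The point of the sketch is organisational rather than linguistic: because the dictionaries are data and each pass is a separate function, any one of them can be amended without rebuilding the rest, which is essentially the 'modularity' claimed for SYSTRAN over its Georgetown ancestor.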
In principle, the Analysis program of English in an English-French system can be adapted without extensive modification to serve as the Analysis program in an English-Italian system [20]. Likewise, the Synthesis programs are to some extent independent of particular source languages. Nevertheless, despite its 'modular' structure SYSTRAN remains a very complex system. The lack of explicit theoretical foundations and consistent methodology as far as linguistic processes are concerned gives many of its rules an ad hoc character. This is particularly apparent in the assignment of 'semantic features' to words and expressions in the dictionaries, as Pigott [21] has demonstrated.

The other 'direct translation' system which became operational in this period was LOGOS, a system designed to translate American aircraft manuals into Vietnamese and said to be now in the process of adaptation for translating from English into French, Spanish and German [1]. Like SYSTRAN, its programs maintain a complete separation of the Analysis and Synthesis stages and so, although the procedures themselves are designed for a specific pair of languages, the programs are in principle adaptable for other pairs. In common with nearly all modern systems there is no confusion of programming processes and linguistic data and rules. But like SYSTRAN the linguistic foundations of the system are weak and inexplicit.

By contrast, the systems which have adopted the 'indirect' approach have been greatly influenced by theories of linguistics. The possibility of translating via an intermediary 'universal' language had been suggested by Weaver in his memorandum [9], but it was not until the 1960's that linguistics could offer any models to apply. The 'interlingual' approach to machine translation attracted two research teams in the early 1960's, at the University of Texas and at Grenoble University. In 'interlingual' systems translation is a two-stage process: from the source language into the interlingua and from the interlingua into the target language (fig. 5). Programs of analysis and synthesis are completely independent, using separate dictionaries and grammars for the source and target languages. The systems are therefore designed so that further programs for additional languages can be incorporated without affecting the analysis and synthesis of languages already in the system.

For the structure of an interlingua there was one obvious model at the time, provided by Chomsky's theory of transformational grammar in its 1965 version [22]. It was argued that while languages differ greatly in 'surface' structures they share common 'deep structure' representations, and that in any one language 'surface' forms which are equivalent in meaning (e.g. paraphrases) are derived from the same 'deep' structure. Consequently, 'deep structures' may be regarded as forms of 'universal' semantic representations. The Texas team adopted this model in a German-English system (METALS) intended to include other languages later [23]. Although they soon found that the Chomskyan conception of transformational rules would not work in a computer program of syntactic analysis -as did many others in computational linguistics (cf. Grishman [24]) -they retained the basic transformational approach. The Analysis program in METALS was in three stages. On the basis of grammatical information from the source language dictionary, the program first produced several tentative 'strings' (sequences) of word-classes (nouns, verbs, etc.).
The next stage examined each potential 'string' in turn and constructed for it possible phrase structure analyses; unacceptable strings were eliminated. In the third stage, semantic information from the dictionary was used to test the semantic coherence of the phrase structures (e.g. by testing for compatible semantic features of verbs and subjects). Then the acceptable phrase structures were converted into a 'deep structure' representation in which relationships between lexical items were given in terms of 'predicates' and 'arguments'; fig. 6 gives an example of a METALS interlingual representation, for the sentence An old man in a green suit looked at Mary's dog.

In the Grenoble system (CETA), designed for Russian-French translation [25], the method of analysis was very similar in basic strategy. As in METALS, the first stage produced familiar 'surface' phrase structures, often more than one for a single sentence. But for 'deep structures' the Grenoble team adopted the dependency model for representing relationships between lexical items (fig. 7). As in METALS, the representation is given in the propositional logical form of 'predicates' (verbs or adjectives) and their 'arguments' or 'actants' (nouns, noun phrases or other propositions). The linguistic model for CETA derives ultimately from Tesnière, but the team was much influenced by the Russian MT researcher Mel'chuk (for details see Hutchins [3]). Fig. 7 shows the CETA representation of the sentence The formula explains the frequent appearance of the neutron.

The generation of target language sentences from 'deep structure' representations was also designed on similar lines in the two systems. In the first stage of Synthesis lexical items of the source language were replaced by equivalents of the target language. Then, the resulting target language 'deep structure' was converted by a series of transformations, using semantic, syntactic and morphological data provided by the target language dictionaries, into 'surface' sentence forms.

From this description it should be clear that neither system created a genuine interlingua; in both cases, the interlingua was restricted to syntactic structures; no attempt was made to decompose lexical items into semantic primitives, which would be necessary for interlingual semantic representations. The conversion of source language vocabulary into the target language was in both cases made through a bilingual dictionary of base forms of words or idioms. Consequently, some semantic equivalents could not be handled if there were different 'deep structures', e.g. He ignored her and He took no notice of her in METALS. In this respect, analysis did not go far enough. In other respects, however, it was found that analysis often went too far, since it destroyed information about the 'surface' forms of source language texts which could have helped the generation of translated texts, e.g. information about which noun ('argument') was the subject, whether the verb was passive, and which clauses were subordinated. Even more serious perhaps was the rigidity of the processes: failure at one stage of analysis to identify components or to eliminate an incorrect parsing affected the performance of all subsequent stages. Too often, too many phrase structures were produced for each sentence: one common source of difficulty in English is the syntactic ambiguity of prepositional phrases, which can modify almost any preceding noun or verb. For example, on the table modifies put in The girl put the book on the table, but modifies book in The girl saw the book on the table; syntactic analysis alone cannot make the correct assignment; only semantic information (about the verbs put and see) can determine which phrase structure is acceptable (cf. figs. 2 and 3).
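The sketch below illustrates how a little semantic information of this kind can choose between the two competing phrase structures for such sentences. The feature labels, lexical entries and function names are invented for the illustration; they are not taken from METALS, CETA or any other system described here.

```python
# Choosing between two phrase-structure analyses of
#   'The girl VERB the book on the table'
# using a toy semantic feature on the verb: a verb of placement 'expects' a
# locative complement, so the prepositional phrase attaches to the verb;
# otherwise it attaches to the preceding noun.  All entries are invented.

VERB_FEATURES = {
    "put": {"expects_location": True},
    "saw": {"expects_location": False},
}

def candidate_parses(verb):
    # The two analyses a purely syntactic parser would offer for the PP.
    pp_on_verb = ("S", ("NP", "the girl"),
                       ("VP", verb, ("NP", "the book"), ("PP", "on the table")))
    pp_on_noun = ("S", ("NP", "the girl"),
                       ("VP", verb, ("NP", "the book", ("PP", "on the table"))))
    return [("pp-modifies-verb", pp_on_verb),
            ("pp-modifies-noun", pp_on_noun)]

def semantically_coherent(label, verb):
    expects_location = VERB_FEATURES.get(verb, {}).get("expects_location", False)
    if label == "pp-modifies-verb":
        return expects_location          # 'put ... on the table' is acceptable
    return not expects_location          # 'the book on the table' only if the verb does not claim the PP

def select_parses(verb):
    return [parse for label, parse in candidate_parses(verb)
            if semantically_coherent(label, verb)]

if __name__ == "__main__":
    for verb in ("put", "saw"):
        print(verb, "->", len(select_parses(verb)), "analysis kept")
```

A test of this kind is easy to state for two verbs; the difficulty, as the CETA experience showed, lay in stating such conditions consistently across a whole dictionary and for sentences of realistic complexity.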
The frequency of such syntactic indeterminacies results in the production of far too many phrase structures which are later found to be semantically incoherent. The CETA team concluded that what was needed was a more sensitive parser, one which could deal straightforwardly with simple sentences but which had access to a full battery of sophisticated analytical techniques to tackle more complex sentences.

In retrospect, the 'interlingual' approach was perhaps too ambitious at that time: the more cautious 'transfer' approach was probably more realistic as well as being, as we shall see, flexible and adaptable in meeting the needs for different levels or 'depths' of syntactic and semantic analysis. In the 'transfer' approach both the source and target languages have their own particular 'deep structure' representations. Translation is thus a three-stage process (fig. 8): Analysis of texts into source language representations, Transfer into target language representations, and Synthesis of texts in the target language. The goal of analysis is to produce representations which resolve the syntactic and lexical ambiguities of the language in question, without necessarily providing unique representations for synonymous constructions and expressions. No analysis is made of elements which might have more than one correspondent in target languages (e.g. English know and French connaître and savoir, or German wissen and kennen). It is the task of Transfer components to convert unambiguous source language representations into the appropriate representations for a particular target language. This can involve restructuring to allow for different conditions attached to particular lexical elements, e.g. English remember is not a reflexive verb but its French equivalent souvenir is, and for differences in syntactic rules, e.g. English allows participle clauses as subjects (Making mistakes is easy) but French and German only infinitive clauses. The depth of syntactic analysis in 'transfer' systems is therefore in general much 'shallower' than in the more ambitious 'interlingual' systems, which would attempt to formulate universal representations. Semantic analysis is also less ambitious, restricted primarily to resolution of homographs and tests of the semantic coherence of potential syntactic analyses. In the Canadian TAUM system, an English-French 'transfer' system developed at Montreal (outlined in fig. 9), the Synthesis stage generates first an appropriate syntactic structure (given the constraints on lexical formations indicated by the French dictionary) and then produces the correct 'surface' morphological forms of verbs, adjectives and articles.

Another example of a 'transfer' system is the Russian-German project at the University of Saarbrücken which began in 1967. The SUSY stages of analysis, transfer and synthesis [28, 29] have basic similarities to those of TAUM, with 'deep' representations also going no further initially than resolving ambiguities within the source language itself. However, problems with pronouns, complex verb groups and elision of nouns and verbs in Russian 'surface' forms demonstrated the necessity for 'deeper' analyses. Since about 1976, the transfer representations in SUSY have been more abstract, approximating more closely an 'interlingual' type of representation.
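To make the Transfer stage described above more concrete, the fragment below sketches the two kinds of transfer rule just mentioned: a lexical rule that chooses between French savoir and connaître for English know, and a structural rule that makes remember reflexive when it becomes se souvenir (de). The representation format, conditions and function names are invented for the illustration; they are not the actual rule formats of TAUM, SUSY or GETA.

```python
# A toy English->French transfer step over a simplified 'deep structure':
# each predicate node is a dict carrying a lexeme and its arguments.
# Lexical transfer may consult the arguments; structural transfer may
# rewrite the node (here, adding a reflexive marker).  Invented data only.

def transfer_know(node):
    # A common rule of thumb: savoir before a clausal object,
    # connaitre before a noun-phrase object.
    obj = node["args"].get("object", {})
    node["lexeme"] = "savoir" if obj.get("type") == "clause" else "connaitre"
    return node

def transfer_remember(node):
    # English 'remember' -> French 'se souvenir (de)': add the reflexive
    # marker and record that the object is introduced by 'de'.
    node["lexeme"] = "souvenir"
    node["reflexive"] = True
    node["object_marker"] = "de"
    return node

TRANSFER_RULES = {"know": transfer_know, "remember": transfer_remember}

def transfer(node):
    rule = TRANSFER_RULES.get(node["lexeme"])
    return rule(node) if rule else node

if __name__ == "__main__":
    example = {"lexeme": "know",
               "args": {"subject": {"type": "np", "head": "girl"},
                        "object": {"type": "clause", "head": "leave"}}}
    print(transfer(example)["lexeme"])   # -> 'savoir'
```

Rules of this kind mention only the particular pair of languages being related, which is why each new language pair needs its own transfer component; keeping those components small is the motivation for pushing as much work as possible into the language-specific Analysis and Synthesis stages.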
Changes have also taken place in the TAUM representations in recent years. Experience on the Aviation project since 1977 has led to the introduction of partial semantic analysis in order to deal with the extremely complex noun phrases encountered in English technical manuals; thus, for example, the analysis of left engine fuel pump suction line would show (fig. 11, semantic analysis in TAUM) functional (FUNCTION), locative (LOC), possessive (HAS) and object (OBJ) relations derived from semantic features supplied by the English dictionary [27].

These changes in TAUM and SUSY during the last five years or so have coincided with developments elsewhere which blur the previous clear typology of systems into 'direct', 'interlingual' and 'transfer'. At Grenoble there has been a fundamental rethinking of MT system design prompted by changes in computer facilities in 1971. The CETA system revealed disadvantages of reducing texts to semantic representations which eliminated useful 'surface' information. The new system GETA [30] is basically a 'transfer' system with stages of analysis, transfer and synthesis much as in TAUM and SUSY, but it retains the general form and 'depth' of the dependency-model representations of the previous Grenoble system. Although the ideal of interlingual representations is no longer the goal, it is intended that the 'deep structure' analyses should be of sufficient abstractness to permit transfer processes to be as straightforward as possible. These developments in GETA, TAUM and SUSY indicate there is now considerable agreement on the basic strategy, i.e. a 'transfer' system with some semantic analysis and some interlingual features in order to simplify transfer components. At the same time, even the 'direct translation' system SYSTRAN has acquired features of a 'transfer' approach in the separation of analysis, transfer and synthesis stages (cf. outlines of the TAUM and SYSTRAN systems in figs. 4 and 9) and in the consequently easier adaptability of SYSTRAN to new language pairs [20].

However, this apparent convergence of approaches in recent years is confined to the design of fully automatic systems dealing with uncontrolled text input and not involving any human intervention during the translation process itself. (The need for at least some human revision of translated texts from operational systems like SYSTRAN is a subsidiary process lying strictly outside the MT systems as such.) In the last five years or so there have appeared a number of 'limited language' systems and 'interactive' systems.

One example of a system with limited syntax and semantics is METEO, developed by members of the Montreal team and using experience of TAUM, which has been translating English weather forecasts into French since 1976 [31]. Another is TITUS, which translates abstracts in the field of textile technology from and into English, French, German and Spanish. Abstracts are written in a standard regulated format, called the 'canonical documentation language', and translated via a simple code interlingua [32]. Such 'limited' systems are, of course, the practical application of what is common knowledge in the field, namely that systems can be more successful if the semantic range and syntactic complexity of texts to be translated can be specified.
It is probably unrealistic to expect any MT system to deal with texts in all subjects; there are good practical reasons for providing topical glossaries, as in SYSTRAN, which can be selected as needed. The selection of glossaries might possibly be automated -there are pointers in the research at Saarbrücken on statistical techniques as aids in homograph resolution [28] and in research on 'sublanguages' by Kittredge [33] and others -but it could be argued that this is more easily and cheaply done by someone knowledgeable in the field concerned.

The attractiveness of 'interactive' machine translation lies precisely in making the best use of both human translators and computers in fruitful collaboration. There are good arguments, practical and economic, for using the computer only for what it can do well, accessing large dictionaries, making morphological analyses and producing simple rough parsings, and for using human skills in the more complex processes of semantic analysis, resolving ambiguities and selecting the appropriate expression when there is a choice of possible translations. Interactive systems offer the realistic possibilities of high-quality translation -a prospect which is still distant in fully automatic systems. The best known interactive system is CULT, which has been producing English translations of Chinese mathematical texts since 1975. Also well known is the system at Brigham Young University (now known as ALPS) for translating English texts simultaneously into French, German, Spanish, Portuguese and eventually many other languages. And most recently of all, there is the appearance of the Weidner system. The first experimental interactive system was MIND in the early 1970's [34]; this was based on the 'transfer' approach, with the computer interrogating a human consultant during the analysis stage about problems with homographs or about ambiguities of syntax, e.g. the problem of prepositional phrases mentioned earlier. CULT is basically a 'direct translation' system [35], involving human participation during analysis for the resolution of homographs and syntactic ambiguities and during synthesis for the insertion of English articles and the determination of verb tenses and moods. The Brigham Young system is 'interlingual' in approach [36], with close human interaction during the analyses of English text into 'deep structure' representations (in 'junction grammar', a model with some affinities to Chomskyan grammars), but with as little as possible during synthesis processes.

The Brigham Young system is regarded by its designers as a transitional system using human skills to overcome the problems of systems like GETA and TAUM until research in artificial intelligence has provided automatic methods. Researchers in machine translation have taken an increasing interest in the possibilities of artificial intelligence, particularly during the last five years or so. In 1960 Bar-Hillel [6] believed he had demonstrated the impossibility of high-quality machine translation when he argued that many semantic problems could be resolved only if computers have access to large encyclopaedias of general knowledge. (His particular example was the homograph pen in the simple sentence The box is in the pen. We know it refers to a container here and not to a writing instrument, but only because we know the size and form of the objects.)
However, it is precisely problems of text understanding involving knowledge structures which have been the subject of much research in artificial intelligence (see Boden [37] for references). As yet, little attention has been paid directly to problems of translation, despite arguments that machine translation provides an objective testbed for AI theories (Wilks [38] ).One of the first to experiment with an AI approach to machine translation was Yorick Wilks [38] who used a method of semantic analysis directly on English texts and thus attempted to bypass problems of syntactic analysis. He also introduced the notion of 'preference semantics': dictionary entries did not stipulate obligatory features but only indicated preferred ones (e.g. drink did not insist that subject nouns always be 'animate', it would allow abnormal and metaphoric usages such as cars drink petrol). Wilks made use of 'commonsense inferences' to link pronouns and their antecedent nouns. For example, in The soldiers fired at the women and we saw several of them fall the linking of the pronoun them to women rather than to soldiers is made by a 'commonsense rule' stating that animate objects are likely to fall if they are hit. A more advanced mechanism for making inferences is embodied in the notion of 'scripts'. At Yale University, Carbonell has recently [39] devised a rudimentary 'interlingual' machine translation system based on the story-understanding model of Roger Schank and associates. A simple English text, the report of an accident, is analysed into a language-independent conceptual representation by referring to 'scripts' about what happens in car accidents, ambulances and hospitals, etc. in order to 'understand' the events described. The resulting representation is the basis for generating texts in Russian and Spanish using methods rather similar to those in the Transfer and Synthesis programs of TAUM, SUSY and GETA. Finally, mention should be made of the research at Heidelberg on the SALAT system of machine translation [40] , a 'transfer' system of the GETA type, which is experimenting with 'deduction' processes to resolve problems with pronouns, to decide between alternative analyses and to determine the correct translation of lexical elements.There are naturally many reservations about the feasibility of using methods of artificial intelligence in machine translation systems; the complexities of knowledge-based procedures in a full-scale system can only be guessed at. It is apparent that any modern system must have sufficient flexibility to experiment with different methods of analysis, including AI methods, to make realistic comparisons of their effectiveness and to incorporate new approaches without detrimental effects on any existing successful procedures. This kind of flexibility in both computational and linguistic processes is to be an integral feature of the multilingual EUROTRA system. The project for an advanced machine translation system to deal with all languages of the European Communities has been established and funded by the Commission after widespread consultations. The project has been set up as a cooperative effort, involving at present the expertise of researchers in six European countries. In general design, EUROTRA represents the culmination of recent thinking in the field [17, 41] . It will be basically a 'transfer' system incorporating the latest advances in semantics and artificial intelligence, with the transfer components kept as simple as possible. 
As in all modern systems it will maintain strict separation of algorithmic processes and linguistic data, it will be highly 'modular' in structure enabling linguists and programmers to develop individual parts independently and to experiment with new methods, it will be hospitable to data created on other systems (e.g. the dictionaries and topical glossaries of SYSTRAN [17] ) and it is intended to be easily adaptable to other computer facilities and networks, in particular to future computer systems. EUROTRA is being designed from the beginning as a multilingual system which will be able to produce translations simultaneously in many languages. It is an ambitious project involving considerable complexities in organisation, collaboration and coordination [41], but it is not unrealistic and it inaugurates a genuine step forward in the evolution of machine translation.This description of the evolution of MT systems has been essentially chronological. Many writers refer to 'generations' of machine translation, usually in order to promote their own system as an example of the latest generation. For some the first generation is represented by the simple word-by-word systems, the second generation added syntactic analysis and the third incorporated semantics of some kind [5, 20] . For others the first generation is represented by the 'direct translation' systems, the second by the 'indirect' systems and the third by systems based on artificial intelligence approaches [2, 42] . As a result SYSTRAN, for example, is sometimes classified as a 'third generation' system because it incorporates some semantic analysis, and sometimes as a 'first generation' system because it adopts the 'direct translation' approach. In addition, there is no place for the 'interactive' systems unless we regard them as 'transitional' stages between generations, as does Melby [38] with the Brigham Young system, or as 'hybrid' forms -i.e. CULT would belong to the first generation as a 'direct' system and Brigham Young to the second as an 'interlingual' system.It appears, however, that research on machine translation falls into fairly distinct periods. (Information on when projects and systems started and finished, as well as other basic data, will be found in the table attached to this paper.) The first period extended from the end of the Second World War until the Georgetown public demonstration of machine translation in 1954. It was a period of mainly small-scale experiments using word-by-word methods. The second period, which lasted until the ALPAC report in 1966, was characterised by vast U.S. governmental and military support of Russian-English systems based on the 'direct translation' approach. In the third period, when support was reduced and machine translation suffered widespread public neglect, research concentrated on 'interlingual' and 'transfer' approaches while, at the same time, 'direct' systems were further developed and became operational in a number of locations. The fourth period began about 1975 with the interest of the Commission of the European Communities in the possibilities of machine translation, marked by the trials of SYSTRAN and the sponsorship of the international EUROTRA project. At about the same time, 'interactive' systems came to public notice and the potential application of AI research began to be discussed. Furthermore, since 1976 there have been a number of conferences [43, 44, 45] indicating a quickening of general interest in the future of machine translation. 
This fourth period may well prove to be the most exciting and promising of them all. | null | null | null | null | Main paper:
the first period, 1946-1954: the earliest experiments:
Although there had been proposals for translation machines in the 1930's (see Zarechnak [8] for details), the real birth of machine translation came after the war with the general availability of the digital computer. From 1946 there were some simple experiments by Booth and Richens in Britain, mainly on automatic dictionaries, but it was the memorandum sent by Warren Weaver in 1949 [9] to some 200 of his acquaintances which launched machine translation as a scientific enterprise. Weaver had been impressed by the successful use of computers in breaking enemy codes during the war and suggested that translation could also be tackled as a decoding problem. He admitted that there were difficult semantic problems but mentioned the old idea of a 'universal language' as a possible intermediary between languages. Before long there were projects underway at many American universities. The early systems were invariably attempts to produce translations by taking the words of a text one at a time, looking them up in a bilingual dictionary, finding the equivalents in the target language and printing out the result in the same sequence as in the source text. If a word happened to have two or more possible translations, they were all printed. The method was obviously unsatisfactory and it was not long before attempts were made to rearrange the sequences of words, which meant that some kind of syntactic analysis was needed.
the second period, 1954-1966: optimism and disillusion:
In 1954 the research team at Georgetown University set up a public demonstration intended to show the technical feasibility of machine translation. With a vocabulary of just 250 Russian words, only six rules of grammar and a carefully selected sample of easy Russian sentences, the system demonstrated had no scientific value but, nevertheless, it encouraged the belief that translation by computer had been solved in principle and that the problems remaining were basically of an engineering nature [5, 8] . In the next ten years, research in the United States was supported on a massive scale -at 17 institutions to the tune of almost 20 million dollars, it has been estimated [7] -but the promised 'break-throughs' did not materialise, optimistic forecasts of commercial systems 'within five years' came to nothing, awareness of serious linguistic problems increased, and above all the translations produced were usually of very poor quality. In 1964 the National Science Foundation set up the Automatic Language Processing Advisory Committee (ALPAC) at the instigation of sponsors of machine translation. It reported in 1966 [10] that machine translation was slower, less accurate and twice as expensive as human translation and recommended no further investment. Research in the United States suffered immediate reductions and machine translation became no longer a 'respectable' scientific pursuit.Although the report was widely condemned as biased and shortsighted -see Locke [5] and Josselson [11] -its negative conclusions are not surprising when we look at the systems in operation or under development at the time. For example, the Mark II system installed in 1964 to produce Russian-English translations for the U.S. Air Force was only a slightly improved version of one of the earliest word-by-word systems (Kay [12] ). The translations required extensive 'post-editing' and were not rated highly.The general strategy employed in systems during this period until the mid-1960's was the 'direct translation' approach ( fig. 1 ): systems were designed in all details specifically for one pair of languages, nearly always, at this time, for Russian as the source language (SL) and English as the target language (TL). The basic assumption was that the vocabulary and syntax of SL texts should be analysed no more than necessary for the resolution of ambiguities, the identification of appropriate translations and the specification of the word order of TL texts. Syntactic analysis was designed to do little more than recognition of word classes (nouns, verbs, adjectives, etc.) in order to deal with homographs (e.g. control as verb or noun). Semantic analysis was rare, being restricted to the use of features such as 'male', 'concrete', 'liquid' etc. in cases where context could resolve ambiguities (e.g. foot cannot be 'animate' in the contexts foot of the hill and foot of the stairs.)A typical example is the Georgetown University system, which in fact proved to be one of the most successful using the 'direct' approach [12, 13] . In 1964 Russian-English systems were delivered to the U.S. Atomic Energy Commission and to Euratom in Italy; both were in regular operation until very recently. The Georgetown research team adopted what Garvin was later [14] to call the 'brute force' method of tackling problems: a program would be written for a particular text corpus, tested on another corpus, amended and improved, tested on a larger corpus, amended again, and so forth. 
The result was a monolithic program of intractable complexity, with no clear separation of those parts which analysed SL texts and those parts which produced TL texts. Syntactic analysis was rudimentary; there was no notion of grammatical rule or syntactic structure, even less of a 'theory' of language or translation. In addition, any information about the grammar of English or Russian which the program used was incorporated in the very structure of the program itself. Consequently modification of the system became progressively more and more difficult [12] . In fact, both the Georgetown systems remained unchanged after their installation in 1964.During this period linguistics had very little impact in practice on the design of machine translation systems. The tradition of Bloomfield which dominated American linguistics in the 1940's and 1950's concentrated on descriptive techniques and on problems of phonology and morphology; it had little interest in syntax or in semantics. Nevertheless, there were some researchers who developed methods of syntactic analysis based on explicit theoretical foundations. For example, Paul Garvin [14] developed his 'fulcrum' method which produced phrase structures indicating dependency relations between constituents, e.g. adjective to noun, noun to finite verb, noun to preposition (see fig. 2 ). The method was adopted in the Wayne State University project, which revealed its shortcomings; after ten years' work (1959) (1960) (1961) (1962) (1963) (1964) (1965) (1966) (1967) (1968) (1969) (1970) (1971) (1972) a very complex program was still unable to parse Russian sentences with more than one finite verb [15] . However, by this time Chomsky had already shown [16] why such syntactic models, in particular the equivalent and more familiar phrase structure version ( fig. 3 ), were in principle inadequate for the representation and description of the syntax of natural languages. Chomsky proposed the transformationalgenerative model which linked 'surface' phrase structures to 'deep' phrase structures by transformational rules.In a survey of machine translation in 1960 Bar-Hillel [6] did not doubt that methods of syntactic analysis could be greatly improved with the help of linguistic theory, but he expressed his conviction that semantic problems could never be completely resolved and that, therefore, high-quality translation by computer was impossible in principle.
the third period, 1966-1975: diversification of strategies:
After the ALPAC report in 1966, research in machine translation continued for some time on a much reduced scale. Its goals had become more realistic; no longer were translations expected to be stylistically perfect, the aim was readability and fidelity to the original. On the other hand, there emerged a number of linguistically more advanced systems based on 'indirect' approaches to system design and there was a welcome increase in the variety of source and target languages.Research continued throughout on 'direct translation' systems. Two of them became fully operational systems during this period. The best known is SYSTRAN, designed initially as a Russian-English system and used in this form by the U.S. Air Force since 1970. Later it was adapted for English-French translation and this version was delivered in 1976 to the Commission of the European Communities. At various stages of development are further versions for French-English and English-Italian translation [17, 18] . SYSTRAN may be regarded as essentially a greatly improved descendant of the Georgetown 'direct translation' system. Linguistically there is little advance, but computationally the improvements are considerable. The main ones lie in the 'modularity' of its programming, allowing for the modification of any part of the processes to be undertaken without the risk of impairing overall efficiency, and in the strict separation of linguistic data and computational processes. It is therefore able to avoid many of the irresolvable complexities of the monolithic Georgetown system. 4 ). The Input program loads the text and the dictionaries, and checks each word against a High Frequency dictionary. Next the remaining words are sorted alphabetically and searched for in the Main Stem dictionary. Both dictionaries supply grammatical information, some semantic data and potential equivalents in the target language. The Analysis program makes seven 'passes' through each sentence: i) to resolve homographs, by examining the grammatical categories of adjacent words; ii) to look for compound nouns (e.g. blast furnace) in a Limited Semantics dictionary; iii) to identify phrase groups by looking for punctuation marks, conjunctions, relative pronouns, etc.; iv) to recognise primary syntactic relations such as congruence, government and apposition; v) to identify coordinate structures within phrases, e.g. conjoined adjectives or nouns modifying a noun; vi) to identify subjects and predicates; and vii) to recognise prepositional structures. The Transfer program has three parts: i) to look for words with idiomatic translations under certain conditions, e.g. agree if in the passive is translated as French convenir, otherwise as être d'accord; ii) to translate prepositions, using the semantic information assigned to words which govern them and which are governed by them; and iii) to resolve the remaining ambiguities, generally by tests specified in the dictionaries for particular words or expressions. The last stage Synthesis produces sentences in the target language from the equivalents indicated in the dictionaries, modifying verb forms and adjective endings as necessary, and finally rearranging the word order, e.g. changing an English adjective-noun sequence to a French noun-adjective sequence.Like its Georgetown ancestor, SYSTRAN is still basically a 'direct translation' system: programs of analysis and synthesis are designed for specific pairs of languages. 
However, in the course of time it has acquired features of a 'transfer' system, as we shall see below, in that the stages of Analysis, Transfer and Synthesis are clearly separated. In principle, the Analysis program of English in an English-French system can be adapted without extensive modification to serve as the Analysis program in an English-Italian system [20] . Likewise, the Synthesis programs are to some extent independent of particular source languages. Nevertheless, despite its 'modular' structure SYSTRAN remains a very complex system. The lack of explicit theoretical foundations and consistent methodology as far as linguistic processes are concerned gives many of its rules an ad hoc character. This is particularly apparent in the assignment of 'semantic features' to words and expressions in the dictionaries, as Pigott [21] has demonstrated.The other 'direct translation' system which became operational in this period was LOGOS, a system designed to translate American aircraft manuals into Vietnamese and said to be now in the process of adaptation for translating from English into French, Spanish and German [l] . Like SYSTRAN, its programs maintain a complete separation of the Analysis and Synthesis stages and so, although the procedures themselves are designed for a specific pair of languages, the programs are in principle adaptable for other pairs. In common with nearly all modern systems there is no confusion of programming processes and linguistic data and rules. But like SYSTRAN the linguistic foundations of the system are weak and inexplicit.By contrast, the systems which have adopted the 'indirect' approach have been greatly influenced by theories of linguistics. The possibility of translating via an intermediary 'universal' language had been suggested by Weaver in his memorandum [9] , but it was not until the 1960's that linguistics could offer any models to apply. The 'interlingual' approach to machine translation attracted two research teams in the early 1960's, at the University of Texas and at Grenoble University. In 'interlingual' systems translation is a two-stage process: from the source language into the interlingua and from the interlingua into the target language ( fig. 5 ). Programs of analysis and synthesis are completely independent, using separate dictionaries and grammars for the source and target languages. The systems are therefor designed so that further programs for additional languages can be incorporated without affecting the analysis and synthesis of languages already in the system.For the structure of an interlingua there was one obvious model at the time provided by Chomsky's theory of transformational grammar in its 1965 version [22] . It was argued that while languages differ greatly in 'surface' structures they share common 'deep structure' representations and that in any one language 'surface' forms which are equivalent in meaning (e.g. paraphrases) are derived from the same 'deep' structure. Consequently, 'deep structures' may be regarded as forms of 'universal' semantic representations. The Texas team adopted this model in a German-English system (METALS) intended to include other languages later [23] . Although they soon found that the Chomskyan conception of transformational rules would not work in a computer program of syntactic analysis -as did many others in computational linguistics (cf. Grishman [24] ) -they retained the basic transformational approach. The Analysis program in METALS was in three stages. 
On the basis of grammatical information from the source language dictionary, the program first produced several tentative 'strings' (sequences) of word-classes (nouns, verbs, etc.). The next stage examined each potential 'string' in turn and constructed for it possible phrase structure analyses; unacceptable strings were eliminated. In the third stage, semantic information from the dictionary was used to test the semantic coherence of the phrase structures (e.g. by testing for compatible semantic features of verbs and subjects). Then the acceptable phrase structures were converted into a 'deep structure' representation in which relationships between lexical items were given in terms of 'predicates' and 'arguments' ( fig. 6 gives an example of a METALS representation).An old man in a green suit looked at Mary's dog Figure 6 . METALS interlingual representation In the Grenoble system (CETA), designed for Russian-French translation [25] , the method of analysis was very similar in basic strategy. As in METALS, the first stage produced familiar 'surface' phrase structures, often more than one for a single sentence. But for 'deep structures' the Grenoble, team adopted the dependency model for representing relationships between lexical items ( fig. 7) . As in METALS, the representation is given in the propositional logical form of 'predicates' (verbs or adjectives) and their 'arguments' or 'actants' (nouns, noun phrases or other propositions). The linguistic model for CETA derives ultimately from Tesnière, but the team was much influenced by the Russian MT researcher Mel'chuk (for details see Hutchins [3] ).The formula explains the frequent appearance of the neutron.The generation of target language sentences from 'deep structure' representations was also designed on similar lines in the two systems. In the first stage of Synthesis lexical items of the source language were replaced by equivalents of the target language. Then, the resulting target language 'deep structure' was converted by a series of transformations using semantic, syntactic and morphological data provided by the target language dictionaries into 'surface' sentence forms.From this description it should be clear that neither system created a genuine interlingua; in both cases, the interlingua was restricted to syntactic structures; no attempt was made to decompose lexical items into semantic primitives, which would be necessary for interlingual semantic representations. The conversion of source language vocabulary into the target language was in both cases made through a bilingual dictionary of base forms of words or idioms. Consequently, some semantic equivalents could not be handled if there were different 'deep structures', e.g. He ignored and He took no notice of her, in METALS. In this respect, analysis did not go far enough. In other respects, however, it was found that analysis often went too far since it destroyed information about the 'surface' forms of source language texts which could have helped the generation of translated texts, e.g. information about which noun ('argument') was the subject, whether the verb was passive, and which clauses were subordinated. Even more serious perhaps was the rigidity of the processes: failure at one stage of analysis to identify components or to eliminate an incorrect parsing affected the performance of all subsequent stages. 
Too often, too many phrase structures were produced for each sentence: one common source of difficulty in English is the syntactic ambiguity of prepositional phrases, which can modify almost any preceding noun or verb. For example, on the table modifies put in The girl put the book on the table, but modifies book in The girl saw the book on the table; syntactic analysis alone cannot make the correct assignment, only semantic information (about the verbs put and see) can determine which phrase structure is acceptable (cf. figs. 2 and 3 ). The frequency of such syntactic indeterminacies results in the production of far too many phrase structures which are later found to be semantically incoherent. The CETA team concluded that what was needed was a more sensitive parser, one which could deal straightforwardly with simple sentences but which had access to a full battery of sophisticated analytical techniques to tackle more complex sentences.In retrospect, the 'interlingual' approach was perhaps too ambitious at that time: the more cautious 'transfer' approach was probably more realistic as well as being, as we shall see, flexible and adaptable in meeting the needs for different levels or 'depths' of syntactic and semantic analysis. In the 'transfer' approach both the source and target languages have their own particular 'deep structure' representations. Translation is thus a three-stage process ( fig. 8 ): Analysis of texts into source language representations, Transfer into target language representations, and Synthesis of texts in the target language. The goal of analysis is to produce representations which resolve the syntactic and lexical ambiguities of the language in question, without necessarily providing unique representations for synonymous constructions and expressions. No analysis is made of elements which might have more than one correspondent in target languages (e.g. English know and French connaître and savoir or German wissen and können). It is the task of Transfer components to convert unambiguous source language representations into the appropriate representations for a particular target language. This can involve restructuring to allow for different conditions attached to particular lexical elements, e.g. English remember is not a reflexive verb but its French equivalent souvenir is, and for differences in syntactic rules, e.g. English allows participle clauses as subjects (Making mistakes is easy) but French and German only infinitive clauses. The depth of syntactic analysis in 'transfer' systems is therefore in general much 'shallower' than more ambitious 'interlingual' systems which would attempt to formulate universal representations. Semantic analysis is also less ambitious, restricted primarily to resolution of homographs and tests of the semantic coherence of potential syntactic analyses. generates first an appropriate syntactic structure (given the constraints on lexical formations indicated by the French dictionary) and then produces the correct 'surface' morphological forms of verbs, adjectives and articles.Another example of a 'transfer' system is the Russian-German project at the University of Saarbrücken which began in 1967. The SUSY stages of analysis, transfer and synthesis [28, 29] have basic similarities to those of TAUM, with 'deep' representations also going no further initially than resolving ambiguities within the source language itself. 
However, problems with pronouns, complex verb groups and elision of nouns and verbs in Russian 'surface' forms demonstrated the necessity for 'deeper' analyses. Since about 1976, the transfer representations in SUSY have been more abstract, approximating more closely an 'interlingual' type of representation.Changes have also taken place in the TAUM representations in recent years. Experience on the Aviation project since 1977 has led to the introduction of partial semantic analysis in order to deal with the extremely complex noun phrases encountered in English technical manuals; thus, for example, the analysis of left engine fuel pump suction line would show ( fig. 11 ) functional (FUNCTION), locative (LOC), possessive (HAS) and object (OBJ) relations derived from semantic features supplied by the English dictionary [27] . Figure 11 . Semantic analysis in TAUM
the current period, since 1975: renewal of optimism:
These changes in TAUM and SUSY during the last five years or so have coincided with developments elsewhere which blur the previous clear typology of systems into 'direct', 'interlingual' and 'transfer'. At Grenoble there has been a fundamental rethinking of MT system design prompted by changes in computer facilities in 1971.The CETA system revealed disadvantages of reducing texts to semantic representations which eliminated useful 'surface' information. The new system GETA [30] is basically a 'transfer' system with stages of analysis, transfer and synthesis much as in TAUM and SUSY, but it retains the general form and 'depth' of the dependency-model representations of the previous Grenoble system. Although the ideal of interlingual representations is no longer the goal, it is intended that the 'deep structure' analyses should be of sufficient abstractness to permit transfer processes to be as straightforward as possible. These developments in GETA, TAUM and SUSY indicate there is now considerable agreement on the basic strategy, i.e. a 'transfer' system with some semantic analysis and some interlingual features in order to simplify transfer components. At the same time, even the 'direct translation' system SYSTRAN has acquired features of a 'transfer' approach in the separation of analysis, transfer and synthesis stages (cf. outlines of the TAUM and SYSTRAN systems in figs. 4 and 9) and in the consequently easier adaptability of SYSTRAN to new language pairs [20] .However, this apparent convergence of approaches in recent years is confined to the design of fully automatic systems dealing with uncontrolled text input and not involving any human intervention during the translation process itself. (The need for at least some human revision of translated texts from operational systems like SYSTRAN is a subsidiary process lying strictly outside the MT systems as such.) In the last five years or so there have appeared a number of 'limited language' systems and 'interactive' systems.One example of a system with limited syntax and semantics is METEO, developed by members of the Montreal team and using experience of TAUM, which has been translating English weather forecasts into French since 1976 [31] . Another is TITUS, which translates abstracts in the field of textile technology from and into English, French, German and Spanish. Abstracts are written in a standard regulated format, called the 'canonical documentation language', and translated via a simple code interlingua [32] . Such 'limited' systems are, of course, the practical application of what is common knowledge in the field, namely that systems can be more successful if the semantic range and syntactic complexity of texts to be translated can be specified. It is probably unrealistic to expect any MT system to deal with texts in all subjects; there are good practical reasons for providing topical glossaries, as in SYSTRAN, which can be selected as needed. There are possibilities that the selection of glossaries might be automated -there are pointers in the research at Saarbrücken on statistical techniques as aids in homograph resolution [28] and in research on 'sublanguages' by Kittredge [33] and others -but it could be argued that this is more easily and cheaply done by someone knowledgeable in the field concerned.The attractiveness of 'interactive' machine translation lies precisely in making the best use of both human translators and computers in fruitful collaboration. 
There are good arguments, practical and economic, for using the computer only for what it can do well, accessing large dictionaries, making morphological analyses and producing simple rough parsings, and for using human skills in the more complex processes of semantic analysis, resolving ambiguities and selecting the appropriate expression when there is a choice of possible translations. Interactive systems offer the realistic possibilities of high-quality translation -a prospect which is still distant in fully automatic systems. The best known interactive system is CULT, which has been producing English translations of Chinese mathematical texts since 1975. Also well known is the system at Brigham Young University (now known as ALPS) for translating English texts simultaneously into French, German, Spanish, Portuguese and eventually many other languages. And most recently of all, there is the appearance of the Weidner system. The first experimental system was MIND in the early 1970's [34] ; this was based on the 'transfer' approach, with the computer interrogating a human consultant during the analysis stage about problems with homographs or about ambiguities of syntax, e.g. the problem of prepositional phrases mentioned earlier. CULT is basically a 'direct translation' system [35] , involving human participation during analysis for the resolution of homographs and syntactic ambiguities and during synthesis for the insertion of English articles and the determination of verb tenses and moods. The Brigham Young system is 'interlingual' in approach [36] , with close human interaction during the analyses of English text into 'deep structure' representations (in 'junction grammar', a model with some affinities to Chomskyan grammars), but with as little as possible during synthesis processes.The Brigham Young system is regarded by its designers as a transitional system using human skills to overcome the problems of systems like GETA and TAUM until research in artificial intelligence has provided automatic methods. Researchers in machine translation have taken an increasing interest in the possibilities of artificial intelligence, particularly during the last five years or so. In 1960 Bar-Hillel [6] believed he had demonstrated the impossibility of high-quality machine translation when he argued that many semantic problems could be resolved only if computers have access to large encyclopaedias of general knowledge. (His particular example was the homograph pen in the simple sentence The box is in the pen. We know it refers to a container here and not to a writing instrument, but only because we know the size and form of the objects.) However, it is precisely problems of text understanding involving knowledge structures which have been the subject of much research in artificial intelligence (see Boden [37] for references). As yet, little attention has been paid directly to problems of translation, despite arguments that machine translation provides an objective testbed for AI theories (Wilks [38] ).One of the first to experiment with an AI approach to machine translation was Yorick Wilks [38] who used a method of semantic analysis directly on English texts and thus attempted to bypass problems of syntactic analysis. He also introduced the notion of 'preference semantics': dictionary entries did not stipulate obligatory features but only indicated preferred ones (e.g. 
drink did not insist that subject nouns always be 'animate', it would allow abnormal and metaphoric usages such as cars drink petrol). Wilks made use of 'commonsense inferences' to link pronouns and their antecedent nouns. For example, in The soldiers fired at the women and we saw several of them fall the linking of the pronoun them to women rather than to soldiers is made by a 'commonsense rule' stating that animate objects are likely to fall if they are hit. A more advanced mechanism for making inferences is embodied in the notion of 'scripts'. At Yale University, Carbonell has recently [39] devised a rudimentary 'interlingual' machine translation system based on the story-understanding model of Roger Schank and associates. A simple English text, the report of an accident, is analysed into a language-independent conceptual representation by referring to 'scripts' about what happens in car accidents, ambulances and hospitals, etc. in order to 'understand' the events described. The resulting representation is the basis for generating texts in Russian and Spanish using methods rather similar to those in the Transfer and Synthesis programs of TAUM, SUSY and GETA. Finally, mention should be made of the research at Heidelberg on the SALAT system of machine translation [40] , a 'transfer' system of the GETA type, which is experimenting with 'deduction' processes to resolve problems with pronouns, to decide between alternative analyses and to determine the correct translation of lexical elements.There are naturally many reservations about the feasibility of using methods of artificial intelligence in machine translation systems; the complexities of knowledge-based procedures in a full-scale system can only be guessed at. It is apparent that any modern system must have sufficient flexibility to experiment with different methods of analysis, including AI methods, to make realistic comparisons of their effectiveness and to incorporate new approaches without detrimental effects on any existing successful procedures. This kind of flexibility in both computational and linguistic processes is to be an integral feature of the multilingual EUROTRA system. The project for an advanced machine translation system to deal with all languages of the European Communities has been established and funded by the Commission after widespread consultations. The project has been set up as a cooperative effort, involving at present the expertise of researchers in six European countries. In general design, EUROTRA represents the culmination of recent thinking in the field [17, 41] . It will be basically a 'transfer' system incorporating the latest advances in semantics and artificial intelligence, with the transfer components kept as simple as possible. As in all modern systems it will maintain strict separation of algorithmic processes and linguistic data, it will be highly 'modular' in structure enabling linguists and programmers to develop individual parts independently and to experiment with new methods, it will be hospitable to data created on other systems (e.g. the dictionaries and topical glossaries of SYSTRAN [17] ) and it is intended to be easily adaptable to other computer facilities and networks, in particular to future computer systems. EUROTRA is being designed from the beginning as a multilingual system which will be able to produce translations simultaneously in many languages. 
It is an ambitious project involving considerable complexities in organisation, collaboration and coordination [41], but it is not unrealistic and it inaugurates a genuine step forward in the evolution of machine translation.
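Stepping back to the 'preference semantics' idea described above, the fragment below is a minimal sketch, under invented assumptions (the feature names, the toy lexicon and the simple counting score are all made up for this example), of how preferred rather than obligatory selection restrictions might be handled; it does not reproduce Wilks's actual mechanism.

```python
# Hedged sketch of 'preference semantics': selection features are preferred,
# not obligatory, so an abnormal reading is still accepted when nothing
# better is available. All data below is invented for illustration.

PREFERENCES = {
    # verb: the semantic feature it prefers in its agent
    "drink": {"agent": "animate"},
}

LEXICAL_FEATURES = {
    "man": {"animate", "human"},
    "cars": {"machine"},
    "petrol": {"liquid"},
}

def preference_score(verb, agent):
    """Count how many of the verb's preferences the proposed agent satisfies."""
    wanted = PREFERENCES.get(verb, {})
    features = LEXICAL_FEATURES.get(agent, set())
    return sum(1 for _slot, feat in wanted.items() if feat in features)

for agent in ("man", "cars"):
    print(f"{agent} drink ... -> preference score {preference_score('drink', agent)}")
```

The point of the sketch is that "cars drink petrol" is not rejected: it simply scores lower, and the analyser keeps the densest reading it can find.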
Tailpiece: summary of evolutionary stages:
This description of the evolution of MT systems has been essentially chronological. Many writers refer to 'generations' of machine translation, usually in order to promote their own system as an example of the latest generation. For some the first generation is represented by the simple word-by-word systems, the second generation added syntactic analysis and the third incorporated semantics of some kind [5, 20] . For others the first generation is represented by the 'direct translation' systems, the second by the 'indirect' systems and the third by systems based on artificial intelligence approaches [2, 42] . As a result SYSTRAN, for example, is sometimes classified as a 'third generation' system because it incorporates some semantic analysis, and sometimes as a 'first generation' system because it adopts the 'direct translation' approach. In addition, there is no place for the 'interactive' systems unless we regard them as 'transitional' stages between generations, as does Melby [38] with the Brigham Young system, or as 'hybrid' forms -i.e. CULT would belong to the first generation as a 'direct' system and Brigham Young to the second as an 'interlingual' system.It appears, however, that research on machine translation falls into fairly distinct periods. (Information on when projects and systems started and finished, as well as other basic data, will be found in the table attached to this paper.) The first period extended from the end of the Second World War until the Georgetown public demonstration of machine translation in 1954. It was a period of mainly small-scale experiments using word-by-word methods. The second period, which lasted until the ALPAC report in 1966, was characterised by vast U.S. governmental and military support of Russian-English systems based on the 'direct translation' approach. In the third period, when support was reduced and machine translation suffered widespread public neglect, research concentrated on 'interlingual' and 'transfer' approaches while, at the same time, 'direct' systems were further developed and became operational in a number of locations. The fourth period began about 1975 with the interest of the Commission of the European Communities in the possibilities of machine translation, marked by the trials of SYSTRAN and the sponsorship of the international EUROTRA project. At about the same time, 'interactive' systems came to public notice and the potential application of AI research began to be discussed. Furthermore, since 1976 there have been a number of conferences [43, 44, 45] indicating a quickening of general interest in the future of machine translation. This fourth period may well prove to be the most exciting and promising of them all.
:
The development of MT system design is described in four periods: the early experimental period, the period of large-scale research on 'direct translation' systems, the period after the ALPAC report in which the 'interlingual' and 'transfer' approaches were developed, and the current period in which interactive systems and 'artificial intelligence' approaches have appeared together with proposals for the multilingual system EUROTRA (since 1975). The evolution of machine translation has been influenced by many factors during a quarter century of research and development. In the early years the limitations of computer hardware and the inadequacies of programming languages were crucial elements, and they cannot be said to be trivial even now. Political and economic forces have influenced decisions about the languages to be translated from, the source languages as they are commonly called, and the languages to be translated into, the target languages. In the 1950's and 1960's concern in the United States about Soviet advances in science and technology encouraged massive funding of experimental Russian-English systems. Today the bicultural policy of Canada justifies support of English-French translation systems and the multilingual policy of the European Communities has led to sponsorship of research into a multilingual system. Other obviously important factors have been the intelligibility and readability of translations and the amount of 'post-editing' (or revising) considered necessary. This paper will concentrate, however, on the 'internal' evolution of machine translation, describing the various strategies or 'philosophies' which have been adopted at different times in the design of systems. It will be concerned only with systems producing fully translated texts; not, therefore, with systems providing aids for translators such as automatic dictionaries and terminology data banks. Only brief descriptions of major systems can be included -for fuller and more comprehensive treatments see Bruderer [1] and Hutchins [2], where also more detailed bibliographies will be found; and for a fuller picture of the linguistic aspects of machine translation see Hutchins [3]. This account will also be restricted to systems in North America and Europe -for descriptions of research in the Soviet Union, which has evolved in much the same way, see Harper [4], Locke [5], Bar-Hillel [6], Roberts and Zarechnak [7], and other references in Hutchins [2].
Appendix:
| null | null | null | null | {
"paperhash": [
"grishman|a_survey_of_syntactic_analysis_procedures_for_natural_language",
"danielson|artificial_intelligence_and_natural_man",
"hutchins|machine_translation_and_machine‐aided_translation",
"wilks|an_intelligent_analyzer_and_understander_of_english",
"lehmann|development_of_german-english_machine_translation_system",
"darnell|translation",
"king|eurotra_–_a_european_system_for_machine_translation",
"pigott|theoretical_options_and_practical_limitations_of_using_semantics_to_solve_problems_of_natural_langua",
"loh|machine_translations_of_chinese_mathematical_articles.",
"kay|the_mind_system"
],
"title": [
"A Survey Of Syntactic Analysis Procedures For Natural Language",
"Artificial Intelligence and Natural Man",
"Machine Translation and Machine‐Aided Translation",
"An intelligent analyzer and understander of English",
"Development of German-English Machine translation System",
"Translation",
"EUROTRA – A European System for Machine Translation",
"Theoretical options and practical limitations of using semantics to solve problems of natural langua",
"Machine Translations of Chinese Mathematical Articles.",
"The MIND System"
],
"abstract": [
"Abstract : The report includes a brief discussion of the role of automatic syntactic analysis, a survey of parsing procedures, past and present, and a discussion of the approaches taken to a number of difficult linguistic problems, such as conjunction and graded acceptability. It also contains precise specifications in the programming language SETL of a number of parsing algorithms.",
"sorts of philosophy in machine theories of memory. And, since one can test the resulting mechanisms, one can thereby indirectly test the underlying philosophies. Contrary to Bursen’s conclusion, not only the study of memory, but even the philosophy of mind is made more scientific due to machine theories. In conclusion, none of Bursen’s arguments provide any support for his final conclusion that ‘there cannot be a scientific, mechanistic, causal explanation for memory’ (p. 147).",
"The recent report for the Commission of the European Communities on current multilingual activities in the field of scientific and technical information and the 1977 conference on the same theme both included substantial sections on operational and experimental machine translation systems, and in its Plan of action the Commission announced its intention to introduce an operational machine translation system into its departments and to support research projects on machine translation. This revival of interest in machine translation may well have surprised many who have tended in recent years to dismiss it as one of the ‘great failures’ of scientific research. What has changed? What grounds are there now for optimism about machine translation? Or is it still a ‘utopian dream’ ? The aim of this review is to give a general picture of present activities which may help readers to reach their own conclusions. After a sketch of the historical background and general aims (section I), it describes operational and experimental machine translation systems of recent years (section II), it continues with descriptions of interactive (man‐machine) systems and machine‐assisted translation (section III), (and it concludes with a general survey of present problems and future possibilities section IV).",
"The paper describes a working analysis and generation program for natural language, which handles paragraph length input. Its core is a system of preferential choice between deep semantic patterns, based on what we call “semantic density.” The system is contrasted: with syntax oriented linguistic approaches, and with theorem proving approaches to the understanding problem.",
"Abstract : The report presents progress in theoretical linguistics, descriptive linguistics, lexicography, and systems design in the development of a German- English Machine Translation System. Work in the theoretical group concentration on intrasentential disambiguation and on improving certain parts of the system to achieve greater economy in processing. The linguistic group was engaged in correcting and updating the existing German and English lexical data bases by assigning syntactic and semantic selection restrictions to lexical items. Work in the system group concentrated on the reduction of the size of the existing LRS lexical data base without information loss, on the conversion of this data base to the LRS subscript format, on the construction of supporting programs to expedite and facilitate the updating of the LRS word lists, and on the construction of part of the LRS grammar maintenance and systems programs.",
"?well worth reproducing in English.?Edts., I. M. Gazette.] On the loss of the epithelium of the intestinal canal, consequent on the excessive secretion of fluid from its surface. We are all well acquainted with the fact, that in certain diseases the outer layers of the epithelial cells protecting the skin are thrown off in flakes ; and I believe that it is the same in Asiatic cholera as regards epithelial cells lining the surface of the jntestinal mucous membrane?a matter of greater pathological significance than that concerning the skin, because the intestinal epithelium is intended to guard more delicate and important structures than the cells that cover the cutis. The symptoms of cholera, however, are very much dependent on this desquamation of the epithelia?a fact which may be demonstrated by the aid of the microscope; but we are not to suppose that all parts of the intestinal canal are equally affected in cholera. The epithelium of the stomach suffers less than that of the intestines, and the upper part of the small intestines is not so deeply involved in the disease as the lower part of the ileum. In the duodenum, where the peristaltic action of the canal is not very strong, you often find the epithelial cells lining the mucous membrane ; the cells are loosened, but riot detached, because this part of the canal has less mechanical work to do than the lower portion of the gut. The valvulae conniventes (kerkring), which are large and closely approximated in the second part of the duodenum, protect by covering in the epithelial celh that lie between them, but on the surface of these folds wo shall observe the commencement of the desquamative process which is so marked in the ileum. We shall see with the naked eye that the epithelium, which should cover the valvulae conniventes, has disappeared in places, leaving small isolated patches of the denuded mucous membrane. In their early stages, these spots are distinguishable by their whiter colour, and by a soft velvet-like texture, which may be well demonstrated if a spot of this kind is isolates1, and fixed on a plate under the object glass of the microscope, little water being allowed to trickle over it. You may also in this way examine the villi, which are clearly denuded of epithelial cells in the patches of the valvulae conniventes above referred to. In some parts wo notice that a space evidently extends through the length of the villi, and externally the villi are covered",
"1. Lessons from the past Previous articles in this Journal will have given the reader an idea of the state of the art in currently operational machine translation Systems. This article describes a system whieh is planned, and which it is hoped will be developed by all the Member States of the European Community acting together, within the framework of a single collaborative project. The motivation for such a project is manifold. First, we have learnt a great deal from the Systems which already exist, both in terms of what to do and in terms of what not to do. To take the positive lessons first: the most important, of course, is that machine aided translation is feasible. This lesson is extremely important. After the disappointments of the 60's, it took a great deal of courage to persist in the belief that it was worthwhiie working on machine translation. A great debt is owed to those who did persist, whether they continued to develop commercial Systems with the tools then available or whether they carried on with the research needed to provide a sound basis for more advanced Systems. Had it not been for their stubbornness, machine translation would now be one of those good ideas which somebody once had, but which proved in the end impractical like a perpetual motion machine, for example instead of being a discipline undergoing a period of renaissance and new growth. Secondly, we have learnt that problems which once seemed intractable are not really so. Looking at a book on machine translation written in the early 60's the other day, I was surprised to find the treatment of idioms and of semi-fixed phrases being discussed äs a difficult theoretical problem. Of course, idioms still must be treated, and must be treated with care, but operational Systems have shown us that they can be succesfully translated. This does not mean that no system will ever again translate \"out of sight, out of mind\" äs \"invisible idiot\", but if it does so, it will be for lack of relevant data, not because mechanisms to deal with such phrases are not adequate. It would be possible to make a fairly extensive list of similar problems, which once gave machine translators nightmares but now ortly cause mild insomnia. Suffice it to say that experience with existing Systems has given us the knowledge that such problems can be solved, and the courage to find ever better ways of solving them. At a technical level, too, we have learnt a lot from existing systems. Early, not very succesful, machine translation Systems were dictionary based, essentially taking one word at a time and trying to find its equivalent in the target language. As a fairly natural reaction to the disappointing results obtained by such a method, there was something of a swing later to concentrating on the linguistic änalysis parts of the system, those parts which tried to determine the underlying structure of the input text in order to translate at a \"deeper\" level. Practical experience has taught us that even though änalysis is cruciäl, dictionaries retain a great importance, in that any working system will rely heavily on large dictionaries, sometimes containing whole expressions äs single entries, rieh in static linguistic Information on each entry and serving äs essential data for the translation process. So we have learnt to pay attention both to the initial design and coding of dictionaries, and to their manipulation in terms of large data bases which must be constantly updated and maintained. 
Based on rather more negative experience, we have learnt that system design is all important in a machine translation system. This can be said rather differently, by saying that we have discovered that a translation system is necessarily going to be big and that big Systems need special treatment. No one person, or even group of persons, can hope to keep a large Computer program under control if it is written äs an amorphous riiass. It will be impossible, when things",
"Within the framework of its Multilingual Action Plan, the Commission of the European Communities has, for the past three years, been involved in the practical development of a machine translation system (Systran, designed by Peter Toma, World Translation Center, La Jolla, California). Of the language couples covered to date, the EnglishFrench pair is certainly the most highly developed, yet it may well be that ultimately the quality of translation obtained from the other systems under development (French-English and English-Italian) will be more acceptable.",
"A practical machine translation system called CULT (Chinese University Language Translator), capable of translating Chinese mathematical texts into readable English, has been developed during the period 1969-77 at the Chinese University of Hong Kong. The design of CULT is based on the algorithm discussed in Section 3; programs for the system are written in Standard FORTRAN and run on the ICL1904A computer system. This system has been modified, improved, and rigorously tested, and its potential and capabilities have been amply demonstrated. Since January 1975 CULT has been used on a regular basis to translate Acta Mathematica Sinica, a scientific journal which is published by the Chinese Academy of Science in Peking.",
"The MIND system is a single computer program incorporating an extensible set of fundamental linguistic processors that can be combined on command to carry out a great variety of tasks from grammar testing to question-answering and language translation. The program is controlled from a graphic display console from which the user can specify the sequence of operations, modify rules, edit texts and monitor the details of each operation to any desired extent. Presently available processors include morphological and syntactic analyzers, a semantic file processor, a transformational component, a morphological synthesizer, and an interactive disambiguator."
],
"authors": [
{
"name": [
"R. Grishman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Peter Danielson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. J. Hutchins"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Y. Wilks"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Lehmann",
"R. Stachowitz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Darnell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. King"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"I. Pigott"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Shiu-chang Loh"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Kay",
"G. Martins"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"35603068",
"147087796",
"17995807",
"5968738",
"60834317",
"215102261",
"33545038",
"36751131",
"61282378",
"43606749"
],
"intents": [
[],
[],
[],
[],
[],
[],
[],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | null | 519 | 0.026975 | null | null | null | null | null | null | null | null |
ad065038904a520dcaa35a9868e245c0b5d4a22b | 53861046 | null | Working with the Weidner machine-aided translation system | I have been asked to speak to you today about the Weidner machine-aided translation system. To do this I shall first tell you a little something about the company I work for, Mitel, and why we decided to purchase a machine-aided translation system. Mitel is a large manufacturer of telephone switching systems which are sold in Canada, the United States, Europe, the Caribbean, the South Pacific, the Far East and in South America. Being able to provide documentation in a number of different languages is therefore a key to our success worldwide. However, this is not as simple as it may appear on the surface. Because of changes and improvements to our products every six months or so, our documentation must be changed accordingly -some 4500 pages or approximately 2 million words per language every six months. Of course, not every word in the documentation is changed, but if a paragraph here and a paragraph there are changed, it can still result in a sizeable amount. Add to this the documentation for our new products and one can come up with a very large figure. The problems associated with this are numerous: the costs of hiring four or five or six translators per language are too high, using freelance people is expensive, and the problems of finding qualified people with telephony as a speciality and of terminology between different translators arise. Add the costs of retyping, outside typesetting and printing and translation becomes almost unaffordable for a company like Mitel. It must therefore be kept in-house. For this reason we decided to purchase the Weidner system, which is designed for a mini-computer such as the Digital PDP 11/34. Because we have three languages on the system, we use a slightly larger computer, the PDP 11/70. We have hooked this up to a number of other machines we have at our disposal in order to help things move along quickly and cost-effectively. For example, we have at Mitel a Xerox 9700 laser printer to do our printing for us and an Addressograph Multigraph/ECRM digitizer, which scans illustrations and transfers them to the new document. These machines allow a document to be printed, including all illustrations, charts and tables, without the need for regular typesetting. A text which is entered into the computer is coded in such a way that it is formatted, the illustrations added at the correct place and the desired font or typeface used. All the coding is done when the English text is input -usually by someone in the publications department and not by the translator. All that is left for the translator to do is to recall the required English file at the desired time and do the translation. Since the text has already been formatted, the translated version will be printed in the same manner. It should be noted that the typesetting codes have no effect on the translation function of the computer. It should also be noted that all of this is done completely electronically -the translator does not need a written document unless he personally prefers to work with one. All of which brings us to the Weidner system and how we do our translation at Mitel. As mentioned, we have three language pairs in our system, English-French, which has been running for about 1½ years; English-Spanish, which has been running for about 8 months; and English-German, which we received from Weidner only a very short time ago. 
The following functions are available to us in the system: A. The Amender, where we can enter texts to be translated and revise translated texts. It is quite a powerful word processor. B. Vocabulary Search is used to ask the computer which words in the source text are unfamiliar to it. C. Deferred Vocabulary Search has the same function as Vocabulary Search, except that the operator instructs the computer when to start the function. This is valuable if the computer is being used heavily during the day and the text to be searched is very long. The operator can instruct the computer to do the vocabulary search at night when it will not interfere with or slow down the work of other users. Conversely, if there are other people using the system, the vocabulary search itself will take longer than it does during off-hours and a lot of time will be wasted. It is therefore to everyone's advantage if this is done overnight. D. Dictionary Update. From the translator's point of view, this is the key to the whole system. This is where the translator enters his vocabulary. This ensures that his terminology remains consistent throughout his translations because once a word is entered into the computer's memory, it will always be translated in the same way. This is also where the computer is "taught" the grammar of the language. The translator tells the computer the word's gender and plural if it is a noun, its inflection if a verb, its agreement if adjective or adverb. This must be done precisely if the translation is to come out properly in the end. E. Listing Utility. This is used to obtain a print-out of one's dictionary. The dictionary can now be carefully studied to see if something has been entered incorrectly. F. Translate. This is self-explanatory and is probably the easiest function for the translator himself and the most complicated for the computer. G. Deferred Translation has the same function as Translate, but like Deferred Vocabulary Search, it can be done during off-hours so as not to interfere with other computer users. H. Synonym Update. This enters synonyms for words in the target language into the computer's memory. We will look at this more carefully later. I. Translation Process Monitor. This option allows the translator to see what the computer is working on. Active, future as well as finished jobs can be displayed. When an active job is finished, it disappears from the screen and the terminal "beeps" to let the translator know that the job is finished. The language pairs as well as the file names are displayed. J. Manager Utilities is a command which the translator doesn't need to use. We leave all of the system problems to the system manager. | {
"name": [
"Hundt, Michael G."
],
"affiliation": [
null
]
} | null | null | Translating and the Computer: Practical experience of machine translation | 1981-11-01 | 2 | 6 | null | null | null | null | done overnight. D. Dictionary Update. From the translator's point of view, this is the key to the whole system. This is where the translator enters his vocabulary. This ensures that his terminology remains consistent throughout his translations because once a word is entered into the computer's memory, it will always be translated in the same way. This is also where the computer is "taught" the grammar of the language. The translator tells the computer the word's gender and plural if it is a noun, its inflection if a verb, its agreement if adjective or adverb. This must be done precisely if the translation is to come out properly in the end. E. Listing Utility. This is used to obtain a print-out of one's dictionary. The dictionary can now be carefully studied to see if something has been entered incorrectly. F. Translate. This is self-explanatory and is probably the easiest function for the translator himself and the most complicated for the computer.G. Deferred Translation has the same function as Translate, but like Deferred Vocabulary Search, it can be done during off-hours so as not to interfere with other computer users.H. Synonym Update. This enters synonyms for words in the target language into the computer's memory. We will look at this more carefully later.I. Translation Process Monitor. This option allows the translator to see what the computer is working on. Active, future as well as finished jobs can be displayed. When an active job is finished, it disappears from the screen and the terminal "beeps" to let the translator know that the job is finished. The language pairs as well as the file names are displayed.J. Manager Utilities is a command which the translator doesn't need to use. We leave all of the system problems to the system manager.At Mitel the translator gets the English text in a pre-publication form. The basic text is there and needs only a few revisions from the technical writers and engineers. At this point, about six weeks before the final document is printed, the translator runs the text through the vocabulary search procedure. He finds out which words are unfamiliar to the computer and enters them into the dictionary. The procedure is quite simple. From the list of options, one chooses "B", Vocabulary Search. The computer asks you which file to search through. After the search has been completed, the results can be displayed on the terminal screen or printed out. Note that the line in which the word is located is also displayed. From this, the context of the word can be derived. The words can also be listed alphabetically or depending on their frequency in the text; however, we have found that the context option is more useful. Now the translator enters the unknown words into the dictionary. This is the most complicated and most important part of the Weidner system. The information given the computer must be exact for the translation to come out properly in the end. Let us look at a few examples.For an example of a noun entry, let us look at the Spanish word for "house", which is "casa". The first question the computer asks is, "Is this word a homograph?" that is, does this word have more than one translation? The answer in this case is yes since the English can also be translated by "albergar", which is the verb "to house". For our purposes, let us look only at "casa". We enter the word in the appropriate space. 
Following this, one is asked the following questions: -Part of Speech (Verb, Noun, Adjective, Adverb, etc.). We have just received a new version of the Weidner system. In the old version, we could only enter verbs, nouns, adjectives and adverbs. Now we can also enter prepositions, conjunctions and so on. However, the old four are still the most important and I shall concentrate on these. -Gender (Masculine, Feminine, Either). -Number. This is a question relating to special nouns, which are always plural in the original but always singular in the target language or vice-versa. -Agency (Human, Group, Body Part, Animal, Inanimate, Concrete, Abstract). Weidner has yet to satisfactorily explain these categories and the reasons for them to us, since most of them have no direct effect on the translation. -Is this a proper noun? (No or Yes.) -Is this a noun of time? -Is this a noun of place? -Does this translation present the "ING" form of the source word? "ING" nouns present a special problem because they can be interpreted by the computer as a verbal form. Nouns such as "building" must therefore be confirmed as such in this step. Once these questions have been answered, a check step or a chance to make changes is included should one decide that there is something needing correction. After this, the computer goes on to the next step. Here the translator is asked how the translation corresponds to the source language, that is, in what way is the translation influenced by the source word. In this case the correspondence is one to one, that is, the target word inflects according to how the source word inflects. This is more important with idioms, which we will look at later. On the next page the computer asks how the word inflects. It gives examples and asks if the words inflect like the examples. Sometimes, three or four examples are given and the translator must choose the one which inflects most like his word. We shall see an example of this later. After this the computer goes back to the first page and enters the word into the dictionary. For an example of a verb let us look at the German translation of the word "bite" which is "beißen". I have purposely chosen this verb because it is irregular. A very specific question is asked for irregular verbs, as we shall see in a moment. At the beginning we go through the same procedure as with a noun. Is this word a homograph? We answer yes for two reasons. First, the word "bite" can also be a noun. Secondly, we enter two forms for verbs. The first is the simple translation, "beißen"; the second is a variation on the translation. We enter "der beißen", which will become the translation for "the dog biting the man". The translation will appear as a relative clause, "der Hund, der den Mann beißt", introduced by the relative pronoun "der". For the purpose of this demonstration we will leave out the noun form. After we enter the translations, the computer again asks us a number of questions about the first translation, "beißen", including part of speech, which we saw in the first example, and the following questions designed specifically for verbs: -Agency (None, Direction, Location). -Is the past participle formed with "haben" or "sein"? -Character length of the separable prefix. Here one enters the number 0 through 15. In some German verbs, such as "hinterherlaufen", which means "to run after" or "to chase", a prefix exists, which is separated from the main body of the verb in certain tenses. (A sketch of the kind of entry record these questions build up is given below.)
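The following Python fragment is that sketch: a hedged illustration of the record a completed questionnaire might produce. The field names (gender, agency, inflection_model, auxiliary, separable_prefix_len and so on) are assumptions made for this example; they mirror the questions described in the text but are not the actual Weidner dictionary format.

```python
# Assumed, simplified shape of a bilingual dictionary entry; the real
# Weidner record layout is not documented here.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DictionaryEntry:
    source: str
    target: str
    part_of_speech: str                      # noun, verb, adjective, adverb, ...
    gender: Optional[str] = None             # nouns: masculine / feminine / either
    agency: Optional[str] = None             # human, group, inanimate, ...
    proper_noun: bool = False
    inflection_model: Optional[str] = None   # answer to "inflects most like ..."
    auxiliary: Optional[str] = None          # German perfect auxiliary: haben / sein
    separable_prefix_len: int = 0            # 0-15, for German separable-prefix verbs
    cross_reference: dict = field(default_factory=dict)  # source-to-target agreement links

# The noun example from the text: English "house" -> Spanish "casa".
casa = DictionaryEntry(source="house", target="casa", part_of_speech="noun",
                       gender="feminine", agency="inanimate",
                       inflection_model="regular noun in -a")

# The irregular verb example: English "bite" -> German "beißen".
beissen = DictionaryEntry(source="bite", target="beißen", part_of_speech="verb",
                          auxiliary="haben", separable_prefix_len=0,
                          inflection_model="strong verb, like leiden (ei -> i)")

print(casa)
print(beissen)
```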
In this case, the 9-character prefix, "hinterher" is separated from the main part of the verb, "laufen", and placed at the end of the sentence.-Does the verb include "ING" adjectives before a noun? In this case, the answer is yes, because the adjective, "biting", does occur before a noun: "the biting dog".-Does the verb include "ING" adjectives after a noun? -Does this form include "ED" adjectives? In this case, the answer is no, but an example is the verb "to desire" where the adjective is "desired".Again the chance to make any necessary changes is given before we go to the next step.Here the computer asks for a cross reference, or the correspondence, between the two languages, which we will come back to.In the next step, three questions are asked.-Is the verb weak or strong, that is, regular or irregular? If the answer is weak, or regular, then the next two questions are left out. However, if the answer is strong, or irregular, then the following two questions are asked.-Does the verb have an inseparable prefix, as is the case with some German irregular verbs?-Without a prefix, this verb inflects most like which of the following? Forty-two examples are given and the translator must decide which, if any, of the following verbs inflects like his. In this case it is No. 5, "leiden". In both verbs the "ei" in the infinitive changes to "i" in the imperfect and perfect: "leiden-littgelitten" and "beißen-biß-gebissen". After this, the computer goes to the next form of the verb, "der beißen", which is entered in exactly the same manner as the simple verb translation. Most of the questions which have just been answered are left out, so that not too much duplication of answers and time-wasting takes place.For an example of an adjective let us take a look at the French translation for the word "happy", which is "heureux". Again the question concerning homographs is asked, after which the translation is entered.After entering the part of speech into the computer, the following questions are put to the translator.-Does this adjective reorder? That is, if the adjective precedes the noun in the source language, does it follow the noun in the target language? "The happy man" becomes "L'homme heureux". The answer is yes.-If the adjective precedes an infinitive, insert: (Nothing, A, or De).-Is this adjective always plural? An example of this is "plusieurs" (many).Again the translator is allowed to make any changes at this point. From here he again goes to the next step, which is cross-referencing. After this he is asked one question concerning the declension of the adjective:-This adjective declines most like: (Doesn't Decline, Heureux, Vieux, Faux, Doux, Index). The computer wants to know how the adjective inflects in the feminine and plural. "Heureux" becomes "heureuse" in the feminine, but "vieux" becomes "vieille" and "faux" becomes "fausse" and so on. In our case, "heureux" declines like "heureux", so it is not a difficult choice at all. Different examples are given for adjectives not ending in "x".After this has been answered, the translation is written into the dictionary. Adverbs are entered in the same way and are very simple for the computer to handle.Let us now have a look at an example of an idiom. An idiom in this case is any phrase of two words or more. 
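As an aside to the adjective example just given, the "declines most like ..." question amounts to inflection by analogy with a model word. The sketch below shows one way such model paradigms could be stored and applied; the endings table and the function are invented for this illustration and are not taken from the Weidner system.

```python
# Hedged sketch of paradigm-by-analogy for French adjectives.
ADJECTIVE_MODELS = {
    # model word: (characters stripped from the dictionary form,
    #              endings for masc. sing., fem. sing., masc. plur., fem. plur.)
    "heureux": (1, ("x", "se", "x", "ses")),
    "faux":    (1, ("x", "sse", "x", "sses")),
    "vieux":   (4, ("ieux", "ieille", "ieux", "ieilles")),
}

def decline_like(word, model):
    """Generate the written forms of `word` by analogy with its declared model."""
    strip, endings = ADJECTIVE_MODELS[model]
    stem = word[:-strip]
    return tuple(stem + ending for ending in endings)

print(decline_like("heureux", "heureux"))     # ('heureux', 'heureuse', 'heureux', 'heureuses')
print(decline_like("malheureux", "heureux"))  # ('malheureux', 'malheureuse', ...)
print(decline_like("vieux", "vieux"))         # ('vieux', 'vieille', 'vieux', 'vieilles')
```

Once an adjective is declared to decline like "heureux", its feminine and plural forms follow automatically, which is all the dictionary question needs to establish.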
Since I am more familiar with German, we will use the example, "to rain cats and dogs", in German: "in Strömen regnen". First of all, all individual words in the English idiom must be separately entered into the dictionary, whether this individual meaning has anything to do with the idiom or not. Then we proceed as with any other entry. In this case, the idiom as a whole is a verb. Therefore, the same questions as for a verb are asked by the computer. The next step is an important step for idioms. This is the cross-referencing, which was mentioned a number of times earlier. Here we ask ourselves, which word, if any, in the translation corresponds to the source language. Looking under "CR-REF", we decide that "in" does not correspond to anything, so we enter a "0". We also decide that "Strömen" does not correspond to anything and we enter another "0". However, "regnen" corresponds to "rain", so we enter a "1", which is the number next to "rain", in the appropriate space. Now "regnen" will do in German what "rain" does in English; that is, if "rain" is plural in English, then "regnen" will be plural in German. After this, the computer asks us what type of word each individual word in the translated idiom is. Beside "in" we enter "other" because it is not a noun, verb, adjective or adverb. However, beside "Strömen" we also enter "other". You may now say to yourself that this is a noun. That is true, but in the idiom it never changes; it doesn't inflect in any way whatsoever. If we told the computer that it was a noun, it would try to inflect it and probably make a mistake in the translation. Beside "regnen", however, we must enter "verb". On the bottom left of the screen the computer then asks us for the key word in the idiom. It does not want the most important word in the idiom, but the word which occurs less frequently than the others in the computer's dictionary. This helps to speed up the computer's search for the idiom during translation. It has no direct effect on the translation itself and any word can be selected without causing any translation problems. Below this the computer asks the translator which source word governs the agreement with the target language. In this case it is no more than a confirmation of the cross reference above. We enter the number "1" because the source word, "rain", governs the agreement. However, in an idiom where there is absolutely no relationship between any of the words in the two languages, that is, where the cross references are all "0", one of the words in the source still influences the target language as to number, tense, etc. This is taken care of here. After this the idiom is written into the dictionary. As I'm sure you can understand, the dictionary entries must be done very carefully and accurately in order to receive the best possible computer translation. At Mitel we have found that idioms are an extremely important part of the translation procedure. You see, machine assisted translation is for the most part word replacement and we all know the problems that idiomatic expressions cause, even in regular translation. Therefore, it is important to enter as many idioms into the dictionary as possible. There is only one thing to watch out for and that is an idiomatic expression which also has a meaning if the words are taken separately. An example is the phrase, "to ring a bell". It literally means "ring a bell", but the idiom means "to arouse a response", as in "that name rings a bell". (A sketch of how such an idiom entry might be represented is given below.)
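The sketch referred to above follows. The dictionary structure, the cross-reference encoding and the toy conjugation are all assumptions made for illustration; they mimic the questionnaire's logic (a "0" marks a frozen word, any other value names the governing source word) rather than reproduce the Weidner implementation.

```python
# Hedged sketch of an idiom entry and of how it might be realised in the
# target text, using the "to rain cats and dogs" -> "in Strömen regnen" example.
IDIOM = {
    "source": ["rain", "cats", "and", "dogs"],
    "target": ["in", "Strömen", "regnen"],
    "cross_ref": [0, 0, 1],            # per target word: 0 = frozen, else 1-based source index
    "target_pos": ["other", "other", "verb"],
    "key_word": "Strömen",             # least frequent word, used only to speed up look-up
    "governing_source": 1,             # source word whose number/tense carries over
}

def realise_idiom(idiom, source_features, conjugate):
    """Build the target phrase, inflecting only cross-referenced words."""
    out = []
    for word, ref, pos in zip(idiom["target"], idiom["cross_ref"], idiom["target_pos"]):
        if ref and pos == "verb":
            governor = idiom["source"][ref - 1]
            out.append(conjugate(word, source_features.get(governor, {})))
        else:
            out.append(word)           # frozen part of the idiom: never inflected
    return " ".join(out)

def toy_conjugate(infinitive, feats):
    # Toy rule, good enough for 'regnen' -> 'regnet'; a real system needs full paradigms.
    return infinitive[:-2] + "et" if feats.get("person") == "3sg" else infinitive

print(realise_idiom(IDIOM, {"rain": {"person": "3sg"}}, toy_conjugate))
# -> "in Strömen regnet": only the cross-referenced verb follows the English "rain".
```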
If the phrase has been entered as an idiom, the first meaning will never be translated. You, the translator, must therefore decide which phrase will occur most often in your work and make the entry accordingly. That is the nice thing about the Weidner dictionary. If there is a translation that will almost certainly never come up in one's field, one can delete it from the existing dictionary or leave it out altogether. Therefore, at Mitel we would not enter the idiom "to ring a bell" because we will never need to have that phrase translated, but in the telephone business, bells literally ring all the time.To see which words or phrases are already in the dictionary, the Listing Utilities option is selected. It arranges all the words in the dictionary alphabetically and prints them out so that they can be studied carefully. A number of words which we do not need for our kind of technical translation had been originally entered into the dictionary by Weidner and we decided to delete these or to change their translation to suit our needs. Now would be a good time to look at the Synonym Update option. This allows the translator to enter synonyms for words in the target language into the computer. For an example we could look at one of the French words for "car" -"auto". The computer asks us for the word type, which in this case is a noun. We enter three synonyms: voiture, véhicule and cabriolet. These are now entered into the computer's memory. When we look at revising the computer translation I shall show you how to put this to use.The translate function itself is very simple. One simply chooses a language pair, selects the translate option and enters the name of the file to be translated. Using the rules in the Weidner program and the words in its dictionary, the computer translates the file into the desired language. The progress of the translation can be followed using the Translation Process Monitor discussed earlier.To revise the translation, the Amender option is chosen. The text is displayed in both languages on the terminal screen, one above the other. It is very easy to see what the translation is and to decide what it should be. The terminal now functions as a word processor. The keys on the top two rows, the twelve keys between the typewriter keyboard and the numbers on the right as well as the keys on the left and right sides of the typewriter keyboard allow one to manipulate the text in any number of different ways. Words can be deleted and moved around. Entire phrases can be replaced and lines deleted. Here is also where our synonym entries from before come into play. Remember that we entered synonyms for the word "auto" in French. To display the synonyms for this word on the screen, one places the cursor at the beginning of the desired word, in this case "auto". The "ESC" button on the left side is pressed and the part of speech chosen: the number 1 for verbs, 2 for nouns, 3 for adjectives and 4 for adverbs. The synonyms will now appear at the bottom of the screen. The translator makes his selection by choosing the appropriate number and the original word is automatically replaced. In this way the translator can rapidly polish his text until he is happy with the translation.All that is left to do now is to remove the source language from the file by choosing one of the Amender functions called "Split Translated Dual-Language File". 
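As a final aside on the revision step, the synonym mechanism described above can be pictured as a small table keyed by word and part of speech. The sketch below uses invented names and data (it is not the Weidner Amender); it only shows the look-up and the numbered, one-for-one replacement that the reviser triggers on screen.

```python
# Hedged sketch of synonym storage and selection during post-editing.
SYNONYMS = {
    ("auto", "noun"): ["voiture", "véhicule", "cabriolet"],
}

def list_synonyms(word, part_of_speech):
    """Return the stored synonyms for a target-language word, if any."""
    return SYNONYMS.get((word, part_of_speech), [])

def replace_with_synonym(sentence, word, part_of_speech, choice):
    """Swap in the synonym picked by its 1-based number, as on the editing screen."""
    options = list_synonyms(word, part_of_speech)
    if not options or not 1 <= choice <= len(options):
        return sentence
    return sentence.replace(word, options[choice - 1], 1)

draft = "Il conduit une auto rouge."
print(list_synonyms("auto", "noun"))                    # ['voiture', 'véhicule', 'cabriolet']
print(replace_with_synonym(draft, "auto", "noun", 1))   # Il conduit une voiture rouge.
```

Even after such a swap the reviser may still need to adjust articles or agreement by hand, which is exactly the kind of touch-up the surrounding text describes.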
After this the translated text is ready for printing by the publications department of the company.It is really not worthwhile discussing the various problems of the system in great depth because in the first place they are much too numerous and secondly, problems will always exist. After one is solved, another is sure to appear. Take the sentence, "The rabbit stole the carrots from the old woman's garden". This should be translated by "Le lapin a volé les carottes du jardin de la vieille femme." and not by "Le lapin a volé les carottes du vieux jardin de la femme." The system is not created with an inherent understanding of language, as we are, so a result such as this should not be too much of a surprise. We know from the context that "old" modifies "woman" and not "garden", but the computer knows nothing of context. In another case we receive the translation: "Ce brochure est conçu vous pour montrer opérer commensement la console" for: "This booklet is designed to show the operation of the console." This is certainly not very good French, but it is not all that bad either. It is now just a matter of using the word processor function of the terminal to change "ce" to "cette", to reverse the words "vous pour", to correct the word "commensement" and to then reverse "opérer comment". This is about as good a translation as can be expected of the Weidner system and probably of any other machine-aided translation system. What it does is give the translator a text, more or less grammatically correct, with the terminology entered into it by the translator himself, which can then be revised. Depending on the speed of the computer, it does this extremely quickly. What it boils down to is that the system is capable of producing a draft, which a translator can then upgrade faster than if he had done the whole thing from scratch. It saves time! Finally, what is required of the user? First and foremost, patience and an understanding of what the machine can and cannot do. It cannot translate a word which is not in the dictionary, for example. The system comes with a basic vocabulary entered into it by Weidner, but any specialized terminology must be entered by the end user. Any word not in the dictionary is left in the original language and the translator can translate it himself when revising the text. The first few months of working with the system are almost exclusively spent entering words into the dictionary. If one is running a vocabulary search, which in the end results in a list of 100 or so words, it can be very discouraging. A whole day can be spent entering these words into the dictionary -not a lot of fun. But it certainly is not time wasted because these words will remain in the dictionary forever and one will never have to enter them again. | null | Main paper:
:
done overnight. D. Dictionary Update. From the translator's point of view, this is the key to the whole system. This is where the translator enters his vocabulary. This ensures that his terminology remains consistent throughout his translations because once a word is entered into the computer's memory, it will always be translated in the same way. This is also where the computer is "taught" the grammar of the language. The translator tells the computer the word's gender and plural if it is a noun, its inflection if a verb, its agreement if adjective or adverb. This must be done precisely if the translation is to come out properly in the end. E. Listing Utility. This is used to obtain a print-out of one's dictionary. The dictionary can now be carefully studied to see if something has been entered incorrectly. F. Translate. This is self-explanatory and is probably the easiest function for the translator himself and the most complicated for the computer.G. Deferred Translation has the same function as Translate, but like Deferred Vocabulary Search, it can be done during off-hours so as not to interfere with other computer users.H. Synonym Update. This enters synonyms for words in the target language into the computer's memory. We will look at this more carefully later.I. Translation Process Monitor. This option allows the translator to see what the computer is working on. Active, future as well as finished jobs can be displayed. When an active job is finished, it disappears from the screen and the terminal "beeps" to let the translator know that the job is finished. The language pairs as well as the file names are displayed.J. Manager Utilities is a command which the translator doesn't need to use. We leave all of the system problems to the system manager.At Mitel the translator gets the English text in a pre-publication form. The basic text is there and needs only a few revisions from the technical writers and engineers. At this point, about six weeks before the final document is printed, the translator runs the text through the vocabulary search procedure. He finds out which words are unfamiliar to the computer and enters them into the dictionary. The procedure is quite simple. From the list of options, one chooses "B", Vocabulary Search. The computer asks you which file to search through. After the search has been completed, the results can be displayed on the terminal screen or printed out. Note that the line in which the word is located is also displayed. From this, the context of the word can be derived. The words can also be listed alphabetically or depending on their frequency in the text; however, we have found that the context option is more useful. Now the translator enters the unknown words into the dictionary. This is the most complicated and most important part of the Weidner system. The information given the computer must be exact for the translation to come out properly in the end. Let us look at a few examples.For an example of a noun entry, let us look at the Spanish word for "house", which is "casa". The first question the computer asks is, "Is this word a homograph?" that is, does this word have more than one translation? The answer in this case is yes since the English can also be translated by "albergar", which is the verb "to house". For our purposes, let us look only at "casa". We enter the word in the appropriate space. Following this, one is asked the following questions:-Part of Speech (Verb, Noun, Adjective, Adverb, etc.). 
We have just received a new version of the Weidner system. In the old version, we could only enter verbs, nouns, adjectives and adverbs. Now we can also enter prepositions, conjunctions and so on. However, the old four are still the most important and I shall concentrate on these.-Gender (Masculine, Feminine, Either).-Number. This is a question relating to special nouns, which are always plural in the original but always singular in the target language or vice-versa.-Agency (Human, Group, Body Part, Animal, Inanimate, Concrete, Abstract). Weidner has yet to47 satisfactorily explain these categories and the reasons for them to us, since most of them have no direct effect on the translation.-Is this a proper noun? (No or Yes.) -Is this a noun of time?-Is this a noun of place?-Does this translation present the "ING" form of the source word? "ING" nouns present a special problem because they can be interpreted by the computer as a verbal form. Nouns such as "building" must therefore be confirmed as such in this step.Once these questions have been answered, a check step or a chance to make changes is included should one decide that there is something needing correction.After this, the computer goes on to the next step. Here the translator is asked how the translation corresponds to the source language, that is, in what way is the translation influenced by the source word. In this case the correspondence is one to one, that is, the target word inflects according to how the source word inflects. This is more important with idioms, which we will look at later.On the next page the computer asks how the word inflects. It gives examples and asks if the words inflect like the examples. Sometimes, three or four examples are given and the translator must choose the one which inflects most like his word. We shall see an example of this later.After this the computer goes back to the first page and enters the word into the dictionary.For an example of a verb let us look at the German translation of the word "bite" which is "beißen". I have purposely chosen this verb because it is irregular. A very specific question is asked for irregular verbs, as we shall see in a moment.At the beginning we go through the same procedure as with a noun. Is this word a homograph? We answer yes for two reasons. First, the word "bite" can also be a noun. Secondly, we enter two forms for verbs. The first is the simple translation, "beißen"; the second is a variation on the translation. We enter "der beißen", which will become the translation for "the dog biting the man". The translation will appear as a relative clause, "der Hund, der den Mann beißt", introduced by the relative pronoun "der". For the purpose of this demonstration we will leave out the noun form.After we enter the translations, the computer again asks us a number of questions about the first translation, "beißen", including part of speech, which we saw in the first example, and the following questions designed specifically for verbs:-Agency (None, Direction, Location). -Is the past participle formed with "haben" or "sein"? -Character length of the separable prefix. Here one enters the number 0 through 15. In some German verbs, such as "hinterherlaufen", which means "to run after" or "to chase", a prefix exists, which is separated from the main body of the verb in certain tenses. 
In this case, the 9-character prefix, "hinterher" is separated from the main part of the verb, "laufen", and placed at the end of the sentence.-Does the verb include "ING" adjectives before a noun? In this case, the answer is yes, because the adjective, "biting", does occur before a noun: "the biting dog".-Does the verb include "ING" adjectives after a noun? -Does this form include "ED" adjectives? In this case, the answer is no, but an example is the verb "to desire" where the adjective is "desired".Again the chance to make any necessary changes is given before we go to the next step.Here the computer asks for a cross reference, or the correspondence, between the two languages, which we will come back to.In the next step, three questions are asked.-Is the verb weak or strong, that is, regular or irregular? If the answer is weak, or regular, then the next two questions are left out. However, if the answer is strong, or irregular, then the following two questions are asked.-Does the verb have an inseparable prefix, as is the case with some German irregular verbs?-Without a prefix, this verb inflects most like which of the following? Forty-two examples are given and the translator must decide which, if any, of the following verbs inflects like his. In this case it is No. 5, "leiden". In both verbs the "ei" in the infinitive changes to "i" in the imperfect and perfect: "leiden-littgelitten" and "beißen-biß-gebissen". After this, the computer goes to the next form of the verb, "der beißen", which is entered in exactly the same manner as the simple verb translation. Most of the questions which have just been answered are left out, so that not too much duplication of answers and time-wasting takes place.For an example of an adjective let us take a look at the French translation for the word "happy", which is "heureux". Again the question concerning homographs is asked, after which the translation is entered.After entering the part of speech into the computer, the following questions are put to the translator.-Does this adjective reorder? That is, if the adjective precedes the noun in the source language, does it follow the noun in the target language? "The happy man" becomes "L'homme heureux". The answer is yes.-If the adjective precedes an infinitive, insert: (Nothing, A, or De).-Is this adjective always plural? An example of this is "plusieurs" (many).Again the translator is allowed to make any changes at this point. From here he again goes to the next step, which is cross-referencing. After this he is asked one question concerning the declension of the adjective:-This adjective declines most like: (Doesn't Decline, Heureux, Vieux, Faux, Doux, Index). The computer wants to know how the adjective inflects in the feminine and plural. "Heureux" becomes "heureuse" in the feminine, but "vieux" becomes "vieille" and "faux" becomes "fausse" and so on. In our case, "heureux" declines like "heureux", so it is not a difficult choice at all. Different examples are given for adjectives not ending in "x".After this has been answered, the translation is written into the dictionary. Adverbs are entered in the same way and are very simple for the computer to handle.Let us now have a look at an example of an idiom. An idiom in this case is any phrase of two words or more. 
Since I am more familiar with German, we will use the example, "to rain cats and dogs", in German: "in Strömen regnen". First of all, all individual words in the English idiom must be separately entered into the dictionary, whether this individual meaning has anything to do with the idiom or not. Then we proceed as with any other entry. In this case, the idiom as a whole is a verb. Therefore, the same questions as for a verb are asked by the computer.
The next step is an important step for idioms. This is the cross-referencing, which was mentioned a number of times earlier. Here we ask ourselves which word, if any, in the translation corresponds to the source language. Looking under "CR-REF", we decide that "in" does not correspond to anything, so we enter a "0". We also decide that "Strömen" does not correspond to anything and we enter another "0". However, "regnen" corresponds to "rain", so we enter a "1", which is the number next to "rain", in the appropriate space. Now "regnen" will do in German what "rain" does in English; that is, if "rain" is plural in English, then "regnen" will be plural in German.
After this, the computer asks us what type of word each individual word in the translated idiom is. Beside "in" we enter "other" because it is not a noun, verb, adjective or adverb. However, beside "Strömen" we also enter "other". You may now say to yourself that this is a noun. That is true, but in the idiom it never changes; it doesn't inflect in any way whatsoever. If we told the computer that it was a noun, it would try to inflect it and probably make a mistake in the translation. Beside "regnen", however, we must enter "verb".
On the bottom left of the screen the computer then asks us for the key word in the idiom. It does not want the most important word in the idiom, but the word which occurs less frequently than the others in the computer's dictionary. This helps to speed up the computer's search for the idiom during translation. It has no direct effect on the translation itself and any word can be selected without causing any translation problems.
Below this the computer asks the translator which source word governs the agreement with the target language. In this case it is no more than a confirmation of the cross reference above. We enter the number "1" because the source word, "rain", governs the agreement. However, in an idiom where there is absolutely no relationship between any of the words in the two languages, that is, where the cross references are all "0", one of the words in the source still influences the target language as to number, tense, etc. This is taken care of here. After this the idiom is written into the dictionary.
As I'm sure you can understand, the dictionary entries must be done very carefully and accurately in order to receive the best possible computer translation. At Mitel we have found that idioms are an extremely important part of the translation procedure. You see, machine assisted translation is for the most part word replacement and we all know the problems that idiomatic expressions cause, even in regular translation. Therefore, it is important to enter as many idioms into the dictionary as possible. There is only one thing to watch out for and that is an idiomatic expression which also has a meaning if the words are taken separately. An example is the phrase, "to ring a bell". It literally means "ring a bell", but the idiom means "to arouse a response", as in "that name rings a bell".
If the phrase has been entered as an idiom, the first meaning will never be translated. You, the translator, must therefore decide which phrase will occur most often in your work and make the entry accordingly. That is the nice thing about the Weidner dictionary. If there is a translation that will almost certainly never come up in one's field, one can delete it from the existing dictionary or leave it out altogether. Therefore, at Mitel we would not enter the idiom "to ring a bell" because we will never need to have that phrase translated, but in the telephone business, bells literally ring all the time.To see which words or phrases are already in the dictionary, the Listing Utilities option is selected. It arranges all the words in the dictionary alphabetically and prints them out so that they can be studied carefully. A number of words which we do not need for our kind of technical translation had been originally entered into the dictionary by Weidner and we decided to delete these or to change their translation to suit our needs. Now would be a good time to look at the Synonym Update option. This allows the translator to enter synonyms for words in the target language into the computer. For an example we could look at one of the French words for "car" -"auto". The computer asks us for the word type, which in this case is a noun. We enter three synonyms: voiture, véhicule and cabriolet. These are now entered into the computer's memory. When we look at revising the computer translation I shall show you how to put this to use.The translate function itself is very simple. One simply chooses a language pair, selects the translate option and enters the name of the file to be translated. Using the rules in the Weidner program and the words in its dictionary, the computer translates the file into the desired language. The progress of the translation can be followed using the Translation Process Monitor discussed earlier.To revise the translation, the Amender option is chosen. The text is displayed in both languages on the terminal screen, one above the other. It is very easy to see what the translation is and to decide what it should be. The terminal now functions as a word processor. The keys on the top two rows, the twelve keys between the typewriter keyboard and the numbers on the right as well as the keys on the left and right sides of the typewriter keyboard allow one to manipulate the text in any number of different ways. Words can be deleted and moved around. Entire phrases can be replaced and lines deleted. Here is also where our synonym entries from before come into play. Remember that we entered synonyms for the word "auto" in French. To display the synonyms for this word on the screen, one places the cursor at the beginning of the desired word, in this case "auto". The "ESC" button on the left side is pressed and the part of speech chosen: the number 1 for verbs, 2 for nouns, 3 for adjectives and 4 for adverbs. The synonyms will now appear at the bottom of the screen. The translator makes his selection by choosing the appropriate number and the original word is automatically replaced. In this way the translator can rapidly polish his text until he is happy with the translation.All that is left to do now is to remove the source language from the file by choosing one of the Amender functions called "Split Translated Dual-Language File". 
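The synonym facility described above is, at bottom, a lookup keyed on the target-language word and its part of speech, with the candidates offered as a numbered menu. The Python sketch below only illustrates that idea; the data and the function names are invented here and are not the Weidner implementation.

```python
# Illustrative sketch of the synonym menu used while revising a translation.
# The entries and names are invented; only the idea follows the text above.
SYNONYMS = {
    ("auto", "noun"): ["voiture", "véhicule", "cabriolet"],
}

def synonym_menu(word: str, pos: str) -> list[str]:
    """Return the numbered synonym choices for a target-language word."""
    return SYNONYMS.get((word, pos), [])

def substitute(text: str, word: str, pos: str, choice: int) -> str:
    """Replace the first occurrence of `word` with the synonym picked by its menu number."""
    options = synonym_menu(word, pos)
    if not 1 <= choice <= len(options):
        return text  # an out-of-range selection leaves the draft unchanged
    return text.replace(word, options[choice - 1], 1)

draft = "Une auto rouge."
print(substitute(draft, "auto", "noun", 1))  # -> "Une voiture rouge."
```

In the system itself this happens inside the Amender, before the source language is split off from the dual-language file.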
After this the translated text is ready for printing by the publications department of the company.It is really not worthwhile discussing the various problems of the system in great depth because in the first place they are much too numerous and secondly, problems will always exist. After one is solved, another is sure to appear. Take the sentence, "The rabbit stole the carrots from the old woman's garden". This should be translated by "Le lapin a volé les carottes du jardin de la vieille femme." and not by "Le lapin a volé les carottes du vieux jardin de la femme." The system is not created with an inherent understanding of language, as we are, so a result such as this should not be too much of a surprise. We know from the context that "old" modifies "woman" and not "garden", but the computer knows nothing of context. In another case we receive the translation: "Ce brochure est conçu vous pour montrer opérer commensement la console" for: "This booklet is designed to show the operation of the console." This is certainly not very good French, but it is not all that bad either. It is now just a matter of using the word processor function of the terminal to change "ce" to "cette", to reverse the words "vous pour", to correct the word "commensement" and to then reverse "opérer comment". This is about as good a translation as can be expected of the Weidner system and probably of any other machine-aided translation system. What it does is give the translator a text, more or less grammatically correct, with the terminology entered into it by the translator himself, which can then be revised. Depending on the speed of the computer, it does this extremely quickly. What it boils down to is that the system is capable of producing a draft, which a translator can then upgrade faster than if he had done the whole thing from scratch. It saves time! Finally, what is required of the user? First and foremost, patience and an understanding of what the machine can and cannot do. It cannot translate a word which is not in the dictionary, for example. The system comes with a basic vocabulary entered into it by Weidner, but any specialized terminology must be entered by the end user. Any word not in the dictionary is left in the original language and the translator can translate it himself when revising the text. The first few months of working with the system are almost exclusively spent entering words into the dictionary. If one is running a vocabulary search, which in the end results in a list of 100 or so words, it can be very discouraging. A whole day can be spent entering these words into the dictionary -not a lot of fun. But it certainly is not time wasted because these words will remain in the dictionary forever and one will never have to enter them again.
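Taken together, the entries walked through in this paper (the noun "casa", the verb "beißen", the adjective "heureux" and the idiom "in Strömen regnen") all come down to a translation plus the grammatical answers the translator gives at entry time. The sketch below shows one hypothetical way such records could be represented; the field names are invented for illustration and are not Weidner's file format.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Hypothetical record for a dictionary entry; not the Weidner format.
@dataclass
class DictionaryEntry:
    source: str                       # e.g. "house" or "rain cats and dogs"
    target: str                       # e.g. "casa" or "in Strömen regnen"
    part_of_speech: str               # noun / verb / adjective / adverb / other
    gender: str | None = None         # nouns: masculine, feminine or either
    inflects_like: str | None = None  # model word picked from the menu, e.g. "leiden"
    reorders: bool = False            # adjective placed after the noun in the target
    # One value per word of the target idiom: 0 = corresponds to nothing,
    # otherwise the number of the source word whose inflection it follows.
    cross_refs: list[int] = field(default_factory=list)

casa = DictionaryEntry("house", "casa", "noun", gender="feminine")
beissen = DictionaryEntry("bite", "beißen", "verb", inflects_like="leiden")
heureux = DictionaryEntry("happy", "heureux", "adjective", reorders=True)
rain_idiom = DictionaryEntry(
    "rain cats and dogs", "in Strömen regnen", "verb",
    cross_refs=[0, 0, 1],  # "in"->0, "Strömen"->0, "regnen" follows source word 1 ("rain")
)
```

Whatever the real representation, the author's point stands: the usefulness of the translation depends entirely on how carefully these answers are given.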
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 519 | 0.011561 | null | null | null | null | null | null | null | null |
28d9357084ee2f1e4d195c50a38933c84089c7a5 | 236999876 | null | Economic aspects of machine translation | After a description of various marketable MT and MAT products and a review of some scenarios for the design of MT and MAT working stations, an analysis of the economic aspects of MT and MAT, as compared with human translation, is made : -investments in systems and dictionaries (new developments and updating), and operating costs -staff involved and time delay of service. Introduction. One usually associates the concept of the economic aspects of a marketable product with such notions as cost, investment, price, time delay, quality, demand, etc. This is generally sufficient when one deals with a product which constitutes a specific example of a family already on the market: e.g. a new brand of coffee or the new model of a car. | {
"name": [
"Van Slype, Georges"
],
"affiliation": [
null
]
} | null | null | Translating and the Computer: Practical experience of machine translation | 1981-11-01 | 2 | 2 | null | This is generally sufficient when one deals with a product which constitutes a specific example of a family already on the market: e.g. a new brand of coffee or the new model of a car.When one has to consider an entirely new product, such as machine translation, one must first try to think about its potential uses:-what kind of service might it render ? -in which technical and organizational conditions could it be implemented ?We tried to answer precisely these questions, among others, in a market study of translation which we completed a few weeks ago for the Commission of the European Communities (Van Slype et al.-1981) .2. Marketable MT and MAT products (machine translation and machine aided translation).Machine translation may, in fact, be a basis for three different kinds of product:-rough translation of free text without further correction (i.e. without post-editing) : such translation may be most valuable to people wishing to get a rapid acquaintance with a journal article, a conference paper, or a working group paper, written in a language they do not understand, in the (numerous) cases where they cannot afford the cost and/or the time delay for a normal human translation process.According to some indications, this potential market (very frequently and imperfectly satisfied nowadays by an oral translation made by a colleague, a family member or ... a translator) represents some 30% of the actual market for formal, written, human translation. The conditions for the realisation of this market are:. cost and time delay significantly reduced (e.g. one order of magnitude lower than human translation) . average text intelligibility* of MT above 75% (versus 98 to 99% for source text) . easy access to a MT system -translation of free text, followed by careful correction (called post-editing) in order to reach the same quality (mainly intelligibility, fidelity, grammatical correctness) as human translation, this product concerns directly the market for human translation, where it brings the kind of job reassessment usually associated with a technological change; it permits the carrying out of the same volume of work with a reduced staff of translators, converted into post-editors (or a higher volume of translation with the same staff), at about the same cost as or a little bit less than human translation; but with considerable reduction in time delay. The conditions for the development of this machine-aided translation product are:. acceptability to the human translator, which appears negotiable when the quality of the MT system is such that the correction (i.e. post-editing) ratio is lower than 20% (1 correction every five words) and when the human translator can be associated with the upgrading of the MT system . use of a word-processing subsystem, allowing easy input of the source text and low cost and fast output of the revised translation. easy access to a MT system, with cost and time delay significantly lower than human translation -translation of controlled text (i.e. pre-edited text, or text written with a limited vocabulary and a limited number of grammar rules), followed by careful post-editing; this product may be used by organizations producing their own texts, and thus able to impose from the start, writing rules that would allow the post-editing efforts to be minimized (e.g. 
instruction manuals, maintenance manuals, ...); the conditions for the development of such systems are :. control of the source texts . access to a MT system with costing and time delay features as for the other MT or MAT products . design of the controlled language subsystem and of the MT subsystem such that the correction ratio is brought to a low level (say : 5 to 10%), permitting a reduction not only in time delay (as with MAT of free text) but also in costs.3. Design of the MT and MAT working station.After this description of three marketable MT and MAT products, let us review how they could be implemented, i.e. how a MT/MAT working station would look.* By intelligibility, I mean the subjective evaluation of the degree of clarity and comprehensibility; this evaluation is made for each sentence in its context, and is then averaged for the whole sample.The first MT systems were batch processing ones: this means that you had first to keypunch your source text onto punched cards or magnetic tape; to feed an input device in the immediate surroundings of the computer with these cards or tape; and to wait (a few hours or a few days) for the willingness of the computer operator to load the translation programs and the dictionaries and to launch the processing. You then received, by mail, a print-out of the rough translation; you had to revise that print-out and to insert your corrections. The post-edited translation was then given to a typist for final typing.This process was good enough for experimental trials and pilot operations, but was clearly not satisfactory for operational conditions (example: Systran I).Nowadays, a MT or MAT system is linked to a word processing subsystem, which allows you :-to have your source text automatically input to the computer . either through a magnetic reading device, if the text is available in magnetic form (e.g. when the text was initially produced by a WP system, or by a photocomposing system, or when it is already stored in a computer to which you have access). or through an optical reader, if the source text is available in printed form and if the character font used is one of those that your OCR (optical character recognition) device can deal with -or, if your input text is available only in handwritten form, or in stenographic form, or in sound recording form, to type it on the keyboard of a terminal linked to a MT computer facility -to enter, on the same terminal, special meanings or specific terminology -to obtain, on your terminal, either on a visual screen, or on paper, or on both, the rough translation -to revise the rough translation . either directly on the terminal, by typing the corrections yourself . or first on paper, and then by having a typist keyboard your corrections on the terminal -to receive the revised translation, printed at your terminal, with a typewritten quality, and in the same lay-out as that of the source text -or to receive the revised translation, on a magnetic tape, ready for photocomposing, if the translation is to be typeset.The minimal equipment required in an automated translation service is the terminal, used -to input the source text, some complementary pieces of lexicon and the post-editing corrections -and to output the rough translation and the revised translation.We have seen that this equipment would be connected to a MT facility. 
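Read as a processing chain, the working station just described reduces to a few steps around the MT facility. The sketch below is only a schematic of that flow; the function names are placeholders and no real MT system is being called.

```python
# Schematic of the MT/MAT working-station flow described above (placeholders only).
def capture_source(document: str) -> str:
    """Source text arrives on magnetic media, through an OCR device, or by keyboarding."""
    return document

def machine_translate(text: str) -> str:
    """The text is sent to the MT facility and a rough translation comes back."""
    return f"<rough translation of: {text!r}>"

def post_edit(rough: str, corrections: dict[str, str]) -> str:
    """The reviser keys only the corrections; word processing produces the final copy."""
    for wrong, right in corrections.items():
        rough = rough.replace(wrong, right)
    return rough

final_copy = post_edit(machine_translate(capture_source("source text")), {})
print(final_copy)
```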
Now, as regards this MT facility, there are several kinds of organizational design to be envisaged:first, the MT facility may be located within the translation service; in that case, the MT facility consists of a minicomputer, totally dedicated to MT application. For instance, Weidner Communications Inc., in Utah, USA, markets a turnkey system, composed of:. a Digital Equipment minicomputer . a few terminals. the MT software . a minimal dictionary (+ 10,000 entries)-second, the MT facility may be located within the translation service parent organization, in its computer centre; in that case, the computer will most generally not be dedicated to MT, but will run MT programs in conjunction with other programs, in a time-sharing mode. For instance, Systran Institute GmbH, in Stuttgart, markets a Systran software package, to be processed on your own computer, and including:. the MT software. a dictionary -thirdly, the MT facility may be located in a specialized organization, called a MT host, to which your terminal would be connected through a public telecommunication network (leased line in the case of heavy traffic, switched line in the case of light traffic). For instance, the Commission of the European Communities envisages launching in the medium run (2 to 3 years), a MT service, based on a MT host computer, and available to all interested users of EURONET-DIANE. This system will allow users who have searched a bibliographic data base on DIANE, to have the abstracts found, written in English or French, automatically sent to the MT host, via Euronet, and to get a rough translation in, respectively, French or Italian, or English.The two first organizational structures are open to any existing translation bureau or translation department, or to any newcomer in the profession; for instance CISI, the French Compagnie Internationale de Services en Informatique, has envisaged launching in Canada a translation service based on MT, with Systran II and human post-editing, but has finally given up its project, due to opposition from the Canadian government.The latter organizational system is, for the moment, only designed for final users in a specific area (rough translation of bibliographic abstracts), but could be thought of, as ultimately a MT time-sharing service, open to any client.Now that we have some ideas of the organization, the process and the final output of MT and MAT systems, we may go on to their actual economic aspects.We shall first consider the cost aspects, then examine the time delay for the service, and finally conclude with some consideration of the staff involved.As for any product, we have to look at:-investment costs -operating costs. The investment costs of a MT or MAT system may include:-hard: the necessary equipment -soft: the translation programs -dictionary: the lexicon(s) used by the translation programs.4.11 Hardware.The hardware includes:-the cost of terminal equipment which, in every case, would be born by the translation service -the computer,. which would be an investment for the translation service in the case of organizational structure n° 1 (dedicated facility). whose costs would be charged to the translation service in proportion to its actual use (i.e. 
they are operating costs, and not investment costs) in the case of the two other organizational structures (internal or external time-sharing).
4.12 Software. The acquisition of the MT software should be considered from two points of view:
-the initial investment of the risk taking initiator of a MT system, be it a private inventor, as in the case of Prof. Toma, or a public programme, as in the case of EUROTRA. The development costs of Systran in USA are estimated to be 5 million dollars in a twenty year period. The research and development budget of EUROTRA was settled at £7 million, to be shared between the Community and the Member States.
The direct costs include three main items:
-work on the terminal
-MT processing on the computer
-human post-editing (in MAT).
The terminal processing may include five jobs: input of the source text, adjustment of vocabulary, output of the rough translation, input of the post-editing corrections, and output of the post-edited translation.
The cost of input of the source text may vary from a maximum when a manual keyboarding has to be done, to a minimum when optical character recognition is available and possible, or, even more so, when the text is already available in magnetic form. The latter case is likely to become more general in the future, as both the author of the original text and the translation service will have their documents typed on word processing equipment (for in-house data) and as more and more books and journal articles are photocomposed, and these are made available, as a by-product, in electronic form.
-NB: the saving in input cost, when the text is available in machine readable form, is partly cancelled out by some more computer processing (text reformatting).
In order to compute the terminal processing cost and to charge it to the correct input and output jobs, one must (a small numerical illustration follows below):
-compute the cost of the terminal(s), e.g. per annum (rental, or depreciation plus maintenance)
-compute the salary plus employer charges of the operator(s)
-cumulate both costs, and divide the total amount by the total number of hours worked each year by all the terminals: this will give the hourly rate for a terminal
-estimate, through observations and/or sampling, the fraction of time devoted on all terminals to each of the five above mentioned jobs
-apply these fractions to the cumulated costs of equipment and personnel, to obtain the amounts to be charged to each job
-divide the three first amounts by the number of hundreds of words translated (with and without post-editing) to obtain the unit costs of: input of source text, adjustment of vocabulary, output of rough translation
-divide the two last amounts by the number of hundreds of words translated and post-edited, to obtain the unit costs for: input of corrections, output of post-edited translations.
Computer processing is usually invoiced to in-house users as well as to outside organizations according to rather intricate formulae, taking into account the resources used (central processing unit time, number of input-output operations, storage volume, ...); these formulae are very easy for computer people to manipulate, but have the very bad characteristic that they cannot be checked by the users! So it is better to try and negotiate with the computer department a unit price on some traceable output, e.g. per 100 translated words, or better per 100 source words.
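The allocation rule above can be followed mechanically. The sketch below works through it in Python with invented figures; none of the numbers come from the paper, they only illustrate the method.

```python
# Worked illustration of the terminal cost allocation described above.
# All figures are invented for the sake of the example.
terminal_cost_per_year = 3_000.0   # rental, or depreciation plus maintenance (GBP)
operator_cost_per_year = 12_000.0  # salary plus employer charges (GBP)
hours_worked_per_year = 1_600.0

total_cost = terminal_cost_per_year + operator_cost_per_year
hourly_rate = total_cost / hours_worked_per_year  # hourly rate for a terminal

# Fraction of terminal time per job, and the volume (in hundreds of words)
# over which that job's share of the cost is spread.
jobs = {
    "input of source text":        (0.40, 20_000),
    "adjustment of vocabulary":    (0.10, 20_000),
    "output of rough translation": (0.10, 20_000),
    "input of corrections":        (0.25, 15_000),  # post-edited volume only
    "output of post-edited text":  (0.15, 15_000),
}

print(f"hourly rate for a terminal: GBP {hourly_rate:.2f}")
for job, (share, hundreds_of_words) in jobs.items():
    unit_cost = total_cost * share / hundreds_of_words
    print(f"{job}: GBP {unit_cost:.3f} per 100 words")
```

The same per-100-words basis is what the text goes on to recommend for the computer processing charge itself.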
In these cases, some attention must nevertheless be paid to the fact that in most MT systems, individual punctuation marks are counted as full words; if one is to compare the cost of MT with that of human translation, where tariffs are usually based on 100 (actual) words or on text lines, some adjustments are required.When the computer is dedicated to MT, then the processing cost is easier to compute: divide the total computer costs (annual rentalcomputer maintenance usually included -or depreciation, plus maintenance; salary of computer operator, if any -many minicomputers are operated by their users), by the number of hundreds of source words submitted to MT during the same period.The salary of the post-editor(s) (employer's charges included) is divided by the number of hundreds of source words submitted to MT and to human post-editing during the same period. If the post-editors are at the same time operators of the terminals for input of source text, input of new vocabulary, and/or input of post-editing corrections, their costs are only counted once:-as operators for the two first kinds of input -as post-editors for the third kind of input.Indirect costs normally include:-depreciation of the investments -overheads.4.221 Depreciation of the investments.We have already taken into account the depreciation (or the rental) of the hardware in the operating costs, because:-a normal computer processing cost, charged in-house or by a subcontractor, includes that depreciation -terminal equipment in a translation service is normally dedicated to translation, and may thus be directly amortized on that activity.The investments to be considered here are:-MT software acquisition -vocabulary building.These investments may be depreciated according to the usual rules, -over a five year period -on a linear, progressive or decreasing method.When part or all of these investments are paid through an annual and/or volume licence fee, the value of the depreciation is of course replaced by that of the licence fee.4.222 Overheads.Overheads (managerial staff, office space, ...) are of course to be taken into account when computing the cost of translation.However, as the overheads are about the same in MT or HT, one does not usually take them into consideration when comparing the two kinds of translation.4.3 Examples.Our own evaluation at the European Commission, in 1976, of the Systran English-French MAT system, compared with HT, gave the following results (cost per 100 words of source text, software investment cost not included).An evaluation of the TAUM-Aviation project, in Canada in 1980, gave comparable results, except for the cost of MT processing, which appeared very high -another reason for the rejection of the system. 
(Cost per 100 words, software investment cost not included) (Gervais, 1980).
WTCC (World Translation Company of Canada), the company which developed Systran II and which markets it in Canada, published in 1980 a cost comparison of HT and MAT (WTCC, 1980), on the basis of:
-1 million words to be translated per year
-1,666 words/day/human translator
-5,000 words/day/post-editor.
HT costs £10.2 per 100 words; MAT costs £5.3 per 100 words (including data capture and computer processing, but not depreciation of software and dictionary).
Systran Institute GmbH estimated, in 1980, on the basis of:
-1 million words/year
-1,800 words/day/translator
-5,400 words/day/post-editor
that HT costs £5.9 per 100 words and MAT costs £2.1 per 100 words (software and dictionary depreciation not included; data capture not included).
4.4 Break-even point and pay back period.
4.41 Formulae. In order to compute the limit of profitability of a MT or MAT system, one should establish the number of words necessary to be translated to write off the investment costs. A rough formula would be:

N = I / (c_HT - c_MAT)

where N is the number of words to be translated to pay off the investment, I is the investment cost (software and dictionary), c_HT is the cost per word of revised human translation and c_MAT is the cost per word of post-edited machine translation.
Now if a given translation service has N words to translate per annum in a given language pair, the investment is paid off within one year, and the formula is correct. But if the activity is less than N, the investment will become profitable only after two or several years, and one should take into account the interest rate on the capital invested; the formula then becomes more intricate.
Taking into account our estimates for Systran costs at the European Commission, we have:
-revised human translation in a public institution: £0.0854 per word
-post-edited machine translation in the same institution: £0.0607 per word
-investment cost to the Commission for the Systran E-F software and for an E-F dictionary in the field of food and agriculture: £187,000
-break-even point: 187,000 / (0.0854 - 0.0607) = 7,570,000 words
-as the Commission does not have such a volume of translation in that language pair and in that field, the more intricate formula has to be applied (supposing n = 1,900,000 words/year and an interest rate i = 10%). Thus, the pay back period is between five and six years.
5. Time delay. One main advantage of MT is that it permits a considerable decrease in the time delay between a translation request and the delivery of the translation. The actual gain arises from several elements:
-the form of the source text and the data capture equipment available: the keyboarding time delay will be:
. zero or nearly zero if the source text is in magnetic form and may be input as such to the computer
. a few hours if the source text is in typed or printed form and if one has adequate OCR equipment
. a few hours to a few days, or even weeks, if the text has to be manually keyboarded
-the organization mode of computer processing:
. in batch mode, the translation processing delay may be counted in hours or days
. in interactive mode, the delay can be expressed in minutes or hours
. without word processing, the final revised document has to be wholly typed again, if a perfect machine copy is necessary, and this may require supplementary hours or days
. with word processing, only the corrections have to be typed, and the final typing is carried out automatically, within a few minutes or hours
-the kind of translation product required : . zero supplementary post-editing time delay for rough translation . short post-editing for pre-edited text .
longer post-editing for free text needing careful revision.Because all these elements are closely connected, it is difficult to define a typical MT and MAT time delay, to be compared with a typical HT time delay. Nevertheless, experience so far shows significant decrease in delay.6. Translator staff.Will MT and MAT decrease the number of translators in the medium or long term?This question is rather difficult to answer. Several points must indeed be taken into account:-the elasticity of the demand to price fluctuations:. an elastic demand is one which increases considerably in response to a price decrease, and vice-versa (e.g.: computer). an inelastic demand is one which remains steady, even when prices go up and down (e.g.: bread). in the case of translation, the actual demand is growing at a very fast rate (approximately 10% per annum) and a potential market (approximately 30% of the actual one) is waiting for a substantial price decrease before it becomes overt; demand for translation thus appears elastic towards a better processing effectiveness -the existence of a substitute product, better fitted to market needs . the consumption of bread, for example, is considerably reduced when a rising standard of living allows most people to afford better liked foods, such as meat, vegetables and fruits . substitutes for translation are either no interlanguage communication at all (a solution which appears impossible in the prevailing economic, political, cultural and technical environment) or better language competence (a solution which appears limited, in spite of all language training); thus, in the medium and long term, translation does not have serious competitors -the effects, direct and indirect, on employment, of technological progress and better productivity . an improvement in productivity is usually associated, in the short term, with a lowering in employment; but, in the medium and long term, this effect may be compensated for by an increase in consumption, and thus may be overcome by an increase in employment, if demand is sufficiently elastic. For instance, there are considerably more people employed now in the computer industry, than ten years ago, because computers are much cheaper now, and thus much more in demand. A huge rise in productivity, on the other hand, in a saturated market, leads to unemployment (as is the case at the present time in the motor industry).As far as translation is concerned, it appears that:-the market is far from saturated -the demand is ready to react positively to an improvement in price and/or time delay -no substitute would replace the demand for translation.In conclusion, it does not seem that the important increase in productivity brought about by MT and MAT should create unemployment among translators.the investment by a translation service. In any case, the ordinary translation service is quite unable to support such huge investments; when it acquires a MT system, or the right to use a MT system, it supports part of that investment through the payment of one or several of following charges:. initial payment (truly an investment) . fee per unit of time or per unit of use (actually an operating cost). maintenance (an investment or an operating cost, according to bookkeeping habits; usually an operating cost).These charges may cover either the software only, or, as already mentioned, the hard, the soft and a basic vocabulary.For instance, -Weidner Communications Inc. 
was quoting, in 1980, £130,000 for a turnkey system for one language pair, including :. a Digital Equipment minicomputer . four terminals . MT and WP (word processing) software . current dictionary (10,000 entries).The same system, but with 20 terminals, costs £220,000. In addition, there is a price of £75,000 per supplementary language pair and 1% per month for maintenance.-Systran Institute GmbH was offering, in 1980, a "Systran licence contract", including, for one language pair:. fixed cost for dictionary update (5,000 technical terms, by Systran Institute, 10,000 by the client), software adaptation (to implement the system on the client's computer), testing and training £22,000. annual licence fees £25,000. quantity licence fees * first million words/year included in annual licence fees * from 1 to 5 million words/year : £1.2 per 100 words. annual system maintenance £5,000.4.13 Dictionary.The dictionary is a very important item when investing in a MT system. It appears that, in many cases, the volume of the specialized terminology used for a given final patron greatly exceeds the size of the standard vocabulary, whose initial building costs may be shared between many users.For instance:-the English-French standard vocabulary delivered by Prof. Toma to the Commission was found to be almost entirely useless for the Commission environment-Weidner supplies a very short standard dictionary (a few thousand terms) with its turnkey systems -the specific dictionary of Systran Russian-English for the American Air Force includes more than one million entries; the specific dictionary of Systran English-French of the European Commission numbers more than hundred thousand entries in the fields of food and agriculture, science, technology and administration.Our own evaluation of the English-French Systran, in 1976, showed that one average entry in the dictionary of that system:-requires 15 minutes (terminological research, linguistic coding and data capture) -costs £3.2 (manpower and computing time) (£1 = 75 BF).The Systran Institute GmbH quotes, in 1980, £0.7 per entry in the technical dictionary, based on text submitted by the client. The cost of the vocabulary seems to be a very important feature of a MT system; a high level of technical sophistication of a MT system may improve the quality of translation, but may also lead to impossibly high costs: the Canadian TAUM-Aviation system was rejected by an evaluation team, among other reasons, because of too high a cost for the building of dictionaries: 3 h 45 and £23 per entry ! 4.2 Operating costs.The operating costs of MT and MAT include direct costs and indirect costs. | null | null | null | null | Main paper:
business organization.:
The minimal equipment required in an automated translation service is the terminal, used -to input the source text, some complementary pieces of lexicon and the post-editing corrections -and to output the rough translation and the revised translation.We have seen that this equipment would be connected to a MT facility. Now, as regards this MT facility, there are several kinds of organizational design to be envisaged:first, the MT facility may be located within the translation service; in that case, the MT facility consists of a minicomputer, totally dedicated to MT application. For instance, Weidner Communications Inc., in Utah, USA, markets a turnkey system, composed of:. a Digital Equipment minicomputer . a few terminals. the MT software . a minimal dictionary (+ 10,000 entries)-second, the MT facility may be located within the translation service parent organization, in its computer centre; in that case, the computer will most generally not be dedicated to MT, but will run MT programs in conjunction with other programs, in a time-sharing mode. For instance, Systran Institute GmbH, in Stuttgart, markets a Systran software package, to be processed on your own computer, and including:. the MT software. a dictionary -thirdly, the MT facility may be located in a specialized organization, called a MT host, to which your terminal would be connected through a public telecommunication network (leased line in the case of heavy traffic, switched line in the case of light traffic). For instance, the Commission of the European Communities envisages launching in the medium run (2 to 3 years), a MT service, based on a MT host computer, and available to all interested users of EURONET-DIANE. This system will allow users who have searched a bibliographic data base on DIANE, to have the abstracts found, written in English or French, automatically sent to the MT host, via Euronet, and to get a rough translation in, respectively, French or Italian, or English.The two first organizational structures are open to any existing translation bureau or translation department, or to any newcomer in the profession; for instance CISI, the French Compagnie Internationale de Services en Informatique, has envisaged launching in Canada a translation service based on MT, with Systran II and human post-editing, but has finally given up its project, due to opposition from the Canadian government.The latter organizational system is, for the moment, only designed for final users in a specific area (rough translation of bibliographic abstracts), but could be thought of, as ultimately a MT time-sharing service, open to any client.Now that we have some ideas of the organization, the process and the final output of MT and MAT systems, we may go on to their actual economic aspects.We shall first consider the cost aspects, then examine the time delay for the service, and finally conclude with some consideration of the staff involved.As for any product, we have to look at:-investment costs -operating costs. The investment costs of a MT or MAT system may include:-hard: the necessary equipment -soft: the translation programs -dictionary: the lexicon(s) used by the translation programs.4.11 Hardware.The hardware includes:-the cost of terminal equipment which, in every case, would be born by the translation service -the computer,. which would be an investment for the translation service in the case of organizational structure n° 1 (dedicated facility). 
whose costs would be charged to the translation service in proportion to its actual use (i.e. they are operating costs, and not investment costs) in the case of the two other organizational structures (internal or external time-sharing).4.12 Software.The acquisition of the MT software should be considered from two points of view:-the initial investment of the risk taking initiator of a MT system, be it:. its private inventor, as in the case-of Prof. The development costs of Systran in USA are estimated to be 5 million dollars in a twenty year period. The research and development budget of EUROTRA was settled at £7 million, to be shared between the Community and the Member StatesThe direct costs include three main items:-work on the terminal -MT processing on the computer -human post-editing (in MAT).The terminal processing may include: The cost of input of the source text may vary from a maximum when a manual keyboarding has to be done, to a minimum when optical character recognition is available and possible, or, even more so, when the text is already available in magnetic form. The latter case is likely become more general in the future, as both the author of the original text and the translation service will have their documents typed on word processing equipment (for in-house data) and as more and more books and journal articles are photocomposed, and these are made available, as a by-product, in electronic form.-NB: the saving in input cost, when the text is available on machine readable form, is partly cancelled out by some more computer processing (text reformatting).In order to compute the terminal processing cost and to charge it to the correct input and output jobs, one must:-compute the cost of the terminal(s), e.g. per annum (rental, or depreciation plus maintenance)-compute the salary plus employer charges of the operator(s) -cumulate both costs, and divide the total amount by the total number of hours worked each year by all the terminals : this will give the hourly rate for a terminal -estimate, through observations, and/or sampling, the fraction of time devoted on all terminals to each of the five above mentioned jobs -apply these fractions to the cumulated costs of equipment and personnel: to obtain the amounts to be charged to each job -divide the three first amounts by the number of hundreds of words translated (with and without post-editing) to obtain the unit costs of:. input of source text . adjustment of vocabulary. output of rough translation -divide the two last amounts by the number of hundreds of words translated and post-edited, to obtain the unit costs for:. input of corrections . output of post-edited translations.Computer processing is usually invoiced to in-house users as well as to outside organizations according to rather intricate formulae, taking into account resources used (central processing unit time, number of input-output operations, storage volume, ...); these formulae are very easy for computer people to manipulate, but have the very bad characteristic that they cannot be checked by the users ! So it is better to try and negotiate with the computer department a unit price on some traceable output, e.g. per 100 translated words, or better per 100 source words. 
In these cases, some attention must nevertheless be paid to the fact that in most MT systems, individual punctuation marks are counted as full words; if one is to compare the cost of MT with that of human translation, where tariffs are usually based on 100 (actual) words or on text lines, some adjustments are required.When the computer is dedicated to MT, then the processing cost is easier to compute: divide the total computer costs (annual rentalcomputer maintenance usually included -or depreciation, plus maintenance; salary of computer operator, if any -many minicomputers are operated by their users), by the number of hundreds of source words submitted to MT during the same period.The salary of the post-editor(s) (employer's charges included) is divided by the number of hundreds of source words submitted to MT and to human post-editing during the same period. If the post-editors are at the same time operators of the terminals for input of source text, input of new vocabulary, and/or input of post-editing corrections, their costs are only counted once:-as operators for the two first kinds of input -as post-editors for the third kind of input.Indirect costs normally include:-depreciation of the investments -overheads.4.221 Depreciation of the investments.We have already taken into account the depreciation (or the rental) of the hardware in the operating costs, because:-a normal computer processing cost, charged in-house or by a subcontractor, includes that depreciation -terminal equipment in a translation service is normally dedicated to translation, and may thus be directly amortized on that activity.The investments to be considered here are:-MT software acquisition -vocabulary building.These investments may be depreciated according to the usual rules, -over a five year period -on a linear, progressive or decreasing method.When part or all of these investments are paid through an annual and/or volume licence fee, the value of the depreciation is of course replaced by that of the licence fee.4.222 Overheads.Overheads (managerial staff, office space, ...) are of course to be taken into account when computing the cost of translation.However, as the overheads are about the same in MT or HT, one does not usually take them into consideration when comparing the two kinds of translation.4.3 Examples.Our own evaluation at the European Commission, in 1976, of the Systran English-French MAT system, compared with HT, gave the following results (cost per 100 words of source text, software investment cost not included).An evaluation of the TAUM-Aviation project, in Canada in 1980, gave comparable results, except for the cost of MT processing, which appeared very high -another reason for the rejection of the system. 
(Cost per 100 words, software investment cost not included) (Gervais -1980).WTCC (World Translation Company of Canada), the company which developed Systran II and which markets it in Canada, published in 1980 a cost comparison of HT and MAT (WTCC -1980) ; on the basis of:-1 million words to be translated per year -1,666 words/day/human translator -5,000 words/day/post-editor HT costs £10.2 per 100 words MAT costs £5.3 per 100 words (including data capture, computer processing, but not depreciation of software and dictionary).Systran Institute GmbH estimates, that in 1980, on the basis of -1 million words/year -1,800 words/day/translator -5,400 words/day/post-editor HT costs £5.9 per 100 words MAT costs £2.1 per 100 words (software and dictionary depreciation not included; data capture not included).4.4 Break-even point and pay back period.4.41 Formulae.In order to compute the limit of profitability of a MT or MAT system, one should establish the number of words necessary to be translated, to write off investment costs.A rough formula would be:where: Now if a given translation service has N words to translate per annum in a given language pair, the investment is paid off within one year, and the formula is correct.N =But if the activity is less than N, the investment will become profitable only after two or several years, and one should take into account the interest rate on the capital invested; the formula becomes more intricate Taking into account our estimates for Systran costs at the European Commission, we have:-revised human translation in a public institution : £0.0854 per word -post-edited machine translation in the same institution : £0.0607 per word -investment cost to the Commission for the Systran E-F software and for an E-F dictionary in the field of food and agriculture : £187,000-break-even point:187,000 ---------------= 7,570,000 words 0.0854 -0.0607 -as the Commission does not have such a volume of translation in that language pair and in that field, the other formula to apply (supposing n = 1,900,000 words/year and i = 10%), is: Thus, the pay back period is between five and six years.ECONOMIC5. Time delay.One main advantage of MT is that it permits a considerable decrease in the time delay between a translation request and the delivery of the translation.The actual gain arises from several elements:-the form of the source text and data capture equipment available: the keyboarding time delay will be:. zero or nearly zero if the source text is in magnetic form and may be input as such to the computer . a few hours if the source text is in typed or printed form and if one has adequate OCR equipment . a few hours to a few days, or even weeks, if the text has to be manually keyboarded -the organization mode of computer processing:. in batch mode, the translation processing delay may be counted in hours or days . in interactive mode, the delay can be expressed in minutes or hours . without word processing, the final revised document has to be wholly typed again, if a perfect machine copy is necessary, and this may require supplementary hours or days . with word processing, only the corrections have to be typed, and the final typing is carried out automatically, within a few minutes or hours -the kind of translation product required :. zero supplementary post-editing time delay for rough translation . short post-editing for pre-edited text . 
longer post-editing for free text needing careful revision.Because all these elements are closely connected, it is difficult to define a typical MT and MAT time delay, to be compared with a typical HT time delay. Nevertheless, experience so far shows significant decrease in delay.6. Translator staff.Will MT and MAT decrease the number of translators in the medium or long term?This question is rather difficult to answer. Several points must indeed be taken into account:-the elasticity of the demand to price fluctuations:. an elastic demand is one which increases considerably in response to a price decrease, and vice-versa (e.g.: computer). an inelastic demand is one which remains steady, even when prices go up and down (e.g.: bread). in the case of translation, the actual demand is growing at a very fast rate (approximately 10% per annum) and a potential market (approximately 30% of the actual one) is waiting for a substantial price decrease before it becomes overt; demand for translation thus appears elastic towards a better processing effectiveness -the existence of a substitute product, better fitted to market needs . the consumption of bread, for example, is considerably reduced when a rising standard of living allows most people to afford better liked foods, such as meat, vegetables and fruits . substitutes for translation are either no interlanguage communication at all (a solution which appears impossible in the prevailing economic, political, cultural and technical environment) or better language competence (a solution which appears limited, in spite of all language training); thus, in the medium and long term, translation does not have serious competitors -the effects, direct and indirect, on employment, of technological progress and better productivity . an improvement in productivity is usually associated, in the short term, with a lowering in employment; but, in the medium and long term, this effect may be compensated for by an increase in consumption, and thus may be overcome by an increase in employment, if demand is sufficiently elastic. For instance, there are considerably more people employed now in the computer industry, than ten years ago, because computers are much cheaper now, and thus much more in demand. A huge rise in productivity, on the other hand, in a saturated market, leads to unemployment (as is the case at the present time in the motor industry).As far as translation is concerned, it appears that:-the market is far from saturated -the demand is ready to react positively to an improvement in price and/or time delay -no substitute would replace the demand for translation.In conclusion, it does not seem that the important increase in productivity brought about by MT and MAT should create unemployment among translators.
g. van slype:
the investment by a translation service. In any case, the ordinary translation service is quite unable to support such huge investments; when it acquires a MT system, or the right to use a MT system, it supports part of that investment through the payment of one or several of following charges:. initial payment (truly an investment) . fee per unit of time or per unit of use (actually an operating cost). maintenance (an investment or an operating cost, according to bookkeeping habits; usually an operating cost).These charges may cover either the software only, or, as already mentioned, the hard, the soft and a basic vocabulary.For instance, -Weidner Communications Inc. was quoting, in 1980, £130,000 for a turnkey system for one language pair, including :. a Digital Equipment minicomputer . four terminals . MT and WP (word processing) software . current dictionary (10,000 entries).The same system, but with 20 terminals, costs £220,000. In addition, there is a price of £75,000 per supplementary language pair and 1% per month for maintenance.-Systran Institute GmbH was offering, in 1980, a "Systran licence contract", including, for one language pair:. fixed cost for dictionary update (5,000 technical terms, by Systran Institute, 10,000 by the client), software adaptation (to implement the system on the client's computer), testing and training £22,000. annual licence fees £25,000. quantity licence fees * first million words/year included in annual licence fees * from 1 to 5 million words/year : £1.2 per 100 words. annual system maintenance £5,000.4.13 Dictionary.The dictionary is a very important item when investing in a MT system. It appears that, in many cases, the volume of the specialized terminology used for a given final patron greatly exceeds the size of the standard vocabulary, whose initial building costs may be shared between many users.For instance:-the English-French standard vocabulary delivered by Prof. Toma to the Commission was found to be almost entirely useless for the Commission environment-Weidner supplies a very short standard dictionary (a few thousand terms) with its turnkey systems -the specific dictionary of Systran Russian-English for the American Air Force includes more than one million entries; the specific dictionary of Systran English-French of the European Commission numbers more than hundred thousand entries in the fields of food and agriculture, science, technology and administration.Our own evaluation of the English-French Systran, in 1976, showed that one average entry in the dictionary of that system:-requires 15 minutes (terminological research, linguistic coding and data capture) -costs £3.2 (manpower and computing time) (£1 = 75 BF).The Systran Institute GmbH quotes, in 1980, £0.7 per entry in the technical dictionary, based on text submitted by the client. The cost of the vocabulary seems to be a very important feature of a MT system; a high level of technical sophistication of a MT system may improve the quality of translation, but may also lead to impossibly high costs: the Canadian TAUM-Aviation system was rejected by an evaluation team, among other reasons, because of too high a cost for the building of dictionaries: 3 h 45 and £23 per entry ! 4.2 Operating costs.The operating costs of MT and MAT include direct costs and indirect costs.
:
This is generally sufficient when one deals with a product which constitutes a specific example of a family already on the market: e.g. a new brand of coffee or the new model of a car.When one has to consider an entirely new product, such as machine translation, one must first try to think about its potential uses:-what kind of service might it render ? -in which technical and organizational conditions could it be implemented ?We tried to answer precisely these questions, among others, in a market study of translation which we completed a few weeks ago for the Commission of the European Communities (Van Slype et al.-1981) .2. Marketable MT and MAT products (machine translation and machine aided translation).Machine translation may, in fact, be a basis for three different kinds of product:-rough translation of free text without further correction (i.e. without post-editing) : such translation may be most valuable to people wishing to get a rapid acquaintance with a journal article, a conference paper, or a working group paper, written in a language they do not understand, in the (numerous) cases where they cannot afford the cost and/or the time delay for a normal human translation process.According to some indications, this potential market (very frequently and imperfectly satisfied nowadays by an oral translation made by a colleague, a family member or ... a translator) represents some 30% of the actual market for formal, written, human translation. The conditions for the realisation of this market are:. cost and time delay significantly reduced (e.g. one order of magnitude lower than human translation) . average text intelligibility* of MT above 75% (versus 98 to 99% for source text) . easy access to a MT system -translation of free text, followed by careful correction (called post-editing) in order to reach the same quality (mainly intelligibility, fidelity, grammatical correctness) as human translation, this product concerns directly the market for human translation, where it brings the kind of job reassessment usually associated with a technological change; it permits the carrying out of the same volume of work with a reduced staff of translators, converted into post-editors (or a higher volume of translation with the same staff), at about the same cost as or a little bit less than human translation; but with considerable reduction in time delay. The conditions for the development of this machine-aided translation product are:. acceptability to the human translator, which appears negotiable when the quality of the MT system is such that the correction (i.e. post-editing) ratio is lower than 20% (1 correction every five words) and when the human translator can be associated with the upgrading of the MT system . use of a word-processing subsystem, allowing easy input of the source text and low cost and fast output of the revised translation. easy access to a MT system, with cost and time delay significantly lower than human translation -translation of controlled text (i.e. pre-edited text, or text written with a limited vocabulary and a limited number of grammar rules), followed by careful post-editing; this product may be used by organizations producing their own texts, and thus able to impose from the start, writing rules that would allow the post-editing efforts to be minimized (e.g. instruction manuals, maintenance manuals, ...); the conditions for the development of such systems are :. control of the source texts . 
access to a MT system with costing and time delay features as for the other MT or MAT products
. design of the controlled language subsystem and of the MT subsystem such that the correction ratio is brought to a low level (say: 5 to 10%), permitting a reduction not only in time delay (as with MAT of free text) but also in costs.
3. Design of the MT and MAT working station.
After this description of three marketable MT and MAT products, let us review how they could be implemented, i.e. how a MT/MAT working station would look.
* By intelligibility, I mean the subjective evaluation of the degree of clarity and comprehensibility; this evaluation is made for each sentence in its context, and is then averaged for the whole sample.
The first MT systems were batch processing ones: this means that you had first to keypunch your source text onto punched cards or magnetic tape; to feed an input device in the immediate surroundings of the computer with these cards or tape; and to wait (a few hours or a few days) for the willingness of the computer operator to load the translation programs and the dictionaries and to launch the processing. You then received, by mail, a print-out of the rough translation; you had to revise that print-out and to insert your corrections. The post-edited translation was then given to a typist for final typing.
This process was good enough for experimental trials and pilot operations, but was clearly not satisfactory for operational conditions (example: Systran I).
Nowadays, a MT or MAT system is linked to a word processing subsystem, which allows you:
- to have your source text automatically input to the computer
  . either through a magnetic reading device, if the text is available in magnetic form (e.g. when the text was initially produced by a WP system, or by a photocomposing system, or when it is already stored in a computer to which you have access)
  . or through an optical reader, if the source text is available in printed form and if the character font used is one of those that your OCR (optical character recognition) device can deal with
- or, if your input text is available only in handwritten form, or in stenographic form, or in sound recording form, to type it on the keyboard of a terminal linked to a MT computer facility
- to enter, on the same terminal, special meanings or specific terminology
- to obtain, on your terminal, either on a visual screen, or on paper, or on both, the rough translation
- to revise the rough translation
  . either directly on the terminal, by typing the corrections yourself
  . or first on paper, and then by having a typist keyboard your corrections on the terminal
- to receive the revised translation, printed at your terminal, with a typewritten quality, and in the same lay-out as that of the source text
- or to receive the revised translation, on a magnetic tape, ready for photocomposing, if the translation is to be typeset.
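The working station just described amounts to a short pipeline: capture the source text, translate, post-edit, then print it or write a tape for photocomposing. The sketch below restates that description schematically; every function name and the sample sentence are invented, and each step is only a trivial stand-in, not an implementation of any particular system.

    # Schematic restatement of the MT/MAT working station described above; purely illustrative stand-ins.
    def capture_source(text, channel):
        # magnetic reader, OCR, or direct keyboard entry
        return text

    def machine_translate(text):
        # stand-in for the MT subsystem producing a rough translation
        return f"[rough translation of: {text}]"

    def post_edit(rough):
        # corrections typed directly on the terminal, or keyed in from a marked-up print-out
        return rough.replace("[rough translation of: ", "[revised translation of: ")

    def workstation_pipeline(source, channel="keyboard", typeset=False):
        rough = machine_translate(capture_source(source, channel))
        revised = post_edit(rough)
        output = "magnetic tape for photocomposing" if typeset else "print-out at the terminal"
        return output, revised

    print(workstation_pipeline("Le rapport annuel est disponible.", channel="OCR"))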
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 519 | 0.003854 | null | null | null | null | null | null | null | null |
feac0355ec1cb38be47800c74c215cd49880408f | 61069362 | null | Practical experience of machine translation | Post-editing is one of the most significant factors in the operation of a computer translation system. The economic validity of computer translation stands or falls on the efficiency and success of the post-editing process. The factors affecting the post-editing functions include the linguistic performance of the system, the quality of the source text, availability of terminology, capabilities of the personnel and mechanical aspect of the translation process. Good morning ladies and gentlemen. I would like to express my sincere appreciation on behalf of General Motors of Canada and myself for the honor of participating in what I believe will be a most informative and educational conference. The exchange of knowledge is a vital necessity in order to secure progress in our specialized field of machine translation. General Motors of Canada became actively involved in language translation using a computer in 1976 with the installation of a system called Systran. It was installed with the purpose of translating large volumes of technical literature such as the vehicle service manuals from English to French. Since then we have expanded the application to include railroad locomotives and highway transit coaches. We are currently staffed with three English-French bilingual translators, one English-French-Spanish trilingual translator and one English-French bilingual typist. In addition one data processing programmer is responsible for the ongoing maintenance of the system. Since the system has been installed, we have translated the following documents: 700 pages of the 1976 Chevrolet truck shop manual for the Canadian Department of National Defence 350 pages of a locomotive blower-type diesel engine manual 350 pages of a locomotive turbo-charged diesel engine manual 448 pages of a locomotive service manual 118 pages of a locomotive operator's manual 450 pages of a transit coach manual 100 pages of an Isuzu owner's manual. And on a continuing basis, we are translating the following items: | {
"name": [
"Sereda, Stanley P."
],
"affiliation": [
null
]
} | null | null | Translating and the Computer: Practical experience of machine translation | 1981-11-01 | 5 | 21 | null | null | null | null | the Product Service bulletins (averaging 72 pages per month) technicians training guides (averaging 50 pages per guide).Both the Electro-Motive diesel engine shop manuals and the Department of National Defence shop manual have been produced in a two-column side by side bilingual format.In discussing language translations using a computer, one very important point must be noted. No computer translation system, as you are aware, is perfect. Due to the intricacies in language rules the system cannot be expected to produce error free translation. Therefore, it must be understood that a computer translation system is in fact a computer assisted translation system where the human translator and not the computer plays the key role.The function of the translator in the computer environment differs from the manual environment in that the translator becomes a post-editor devoting his time and attention to refining the translation rather than spending a great deal of tedious time on manual translation of common words. Ideally, the translator's function should be to proof read the computer translated text and make few necessary refinements. This will happen only through vast improvements in the computer translation technology.With today's technology, the post-editing function is the most time consuming and costly segment of the computer translation process. In order to make the system economically viable, the post-editing function must be efficient to realize sufficient savings over the manual process to cover the extra cost of computer resource usage.On any given day, an experienced translator working on technical material, can manually produce final copy at a rate varying between 800 and 1,500 words per day depending on the difficulty of the text. A computer will process the same work in less than two minutes. Currently, our translators are able to post-edit at a rate of 3 to 4 times faster than manual translation. We believe this ratio can be increased further with linguistic enhancements.The factors that determine the effectiveness of computer translation and specifically the post-editing function can be classified as follows:-the linguistic performance of the system -the source language text to be translated -the availability of terminology -the translators who carry out post-editing and -the mechanical aspect of the system.It is obvious that the linguistic performance of the computer translation system is a vital factor in determining the efficiency of post-editing. If the analysis and synthesis systems function incorrectly, then the target language text will be difficult to edit. Experience has shown that a simple word-for-word translation is impractical to post-edit; that is, the cost of machine translation is greater than the cost of manual translation. The translation system must carry out a certain level of "intelligent" analysis of the source language and selective synthesis of the target language.It should be noted that different errors in translation have a different effect on the post-editing function. Minor errors involving articles and verb/adverb re-arrangement can be resolved quickly and easily by the translator. 
On the other hand, certain kinds of structural errors can be extremely difficult to correct, perhaps requiring the complete rewrite of the affected sentence.The linguistic performance of the system is the main factor that affects postediting. Unfortunately, it is the only area in the translation process where the user of the system has very little control in implementing necessary improvements. User can only identify the problem areas that require correction and then wait for the errors in the system to be corrected.Inconsistent linguistic analysis, unless corrected on a timely basis, can cause a great deal of frustration on the part of the translator. Of course, we realize that it is impossible to correct all linguistic problems. However, we hope that further enhancements to the system will minimize errors thus reducing the postediting requirement.The source text affects the degree of post-editing required in the translation. Text containing grammatical errors or that bends the rule of the source language will produce unpredictable translation. For instance, some of the problems encountered in technical publications are incomplete sentences, ambiguous text due to lack of articles and punctuation and the use of abbreviations. Well written, well punctuated and unambiguous source text results in the translation that will require minimal post-editing. The use of consistent terminology and sentence structure will also lessen the need for post-editing.To achieve acceptable level of translation, it may be necessary to evaluate and pre-edit the source text in a way that recognizes the limitations of machine translation. The need for pre-edit can be further reduced through controlled writing by applying certain text preparation guidelines on terminology usage and sentence structure.It is possible to lessen the overall translation requirements, especially with technical manuals, by substituting illustrations in place of the written text while the source materials are being prepared.It is apparent that even the most advanced computer translation system will be useless without the availability of sufficient terminology in the source and target languages. Our entire vocabulary is contained in two dictionaries, stem dictionary which contains single words and a dictionary containing multi-word expressions. Currently these dictionaries contain 52,500 stem words and 78,000 expressions which are constantly being updated. Most of the words and expressions in the dictionary are technical terms. The vocabulary is being expanded to include words and expressions pertaining to other subject fields of our business. We are currently working to expand the English-French vocabulary to include Spanish. The English-Spanish dictionary currently contains 14,200 stem words and 5,350 expressions.During the early stages of our activity we undertook an extensive dictionary coding procedure. We would translate a document, then code all unfound words and expressions for updating the dictionaries. As the vocabulary increases, the volume of the dictionary update decreases. However, the time and effort required for terminology research do not decrease proportionally with the volume. Finding a correct equivalent for technical terms is not a simple dictionary look-up operation. 
It is a difficult intellectual process involving knowledge of the technical field and the practices of the "source" and "target" languages.Clearly, improvements to the dictionary cost time and money, and should be offset by improvements in the post-editing performance. Thus, items which occur very infrequently in real text should not be coded as it is unlikely that the benefit will match the cost of the coding operation. Another point to consider in dictionary coding is that due to the nature of linguistic analysis, a dictionary change made for a specific text can have a negative effect on the translation of other texts.One factor in the terminology research problem is perhaps peculiar to North America. Because of the cultural and technological dominance of the English language, the English terminology especially in technical fields is pervasive even in the francophone Province of Quebec. This situation in Quebec is expected to change through the program of francization undertaken by the Provincial Government of Quebec.The process of post-editing involves two main tasks; to identify errors in the translation and to find solutions for the errors. To carry out these tasks, the translator must be completely fluent in the target language as well as the source language. Furthermore, the individual should have a good knowledge of the subject matter involved in the translation to be an effective post-editor. An individual fluent in the target language will be able to recognize linguistic errors with little difficulty. But the translator will have difficulty detecting meaning errors unless he can read and understand the source text. In technical text, the understanding of the subject matter will further enhance his ability to identify factual errors.Once an error has been identified (whether a linguistic error or a meaning error), a correct form must be found. This process also requires native knowledge of the target language and specific technical skills in the appropriate field. It should be noted however that the translator must be controlled to some extent -particularly in terms of using acceptable standardized terminology and in terms of not wasting excessive amounts of time on purely stylistic changes.It is possible that this phase involves the translator in consultation with other technical specialists or reference material.The availability of the computer resources for translation activity offers the opportunity to take advantage of these resources to facilitate the post-editing function.At General Motors, text processing and word processing systems are used to pre-edit the source text and to post-edit the target text. User friendly procedures have been developed to allow the translators to initiate computer translation by specifying options and parameters via video display terminals. The translators are provided with a document which lists the source and target texts in a side by side format to post-edit the translation.To simplify terminology look-up, we have developed an English-French on-line dictionary system. This system provides the instantaneous translation of words or expressions contained in our dictionaries on a video display screen, in their basic forms, eliminating the need for hard copy dictionary listings.We have also developed a terminal entry dictionary coding system to assist translators in updating the dictionaries. 
This coding system eliminates the need for hard copy coding sheets and at the same time allows us to control job submissions in a more efficient manner.
We have developed pre-processor and post-processor programs to simplify the production of the target language document. Any data such as photocomposition codes and text processing codes if left in the text may cause erroneous translation. The pre-processor program flags these codes as "do not translate" thus eliminating these codes from the translation process. The post-processor reintroduces these codes back into the target language text thus eliminating the need for re-keyboarding them.
As you can gather, we have endeavoured to mechanize and facilitate the postediting process to maximize translation productivity. However, our experience shows the need for continuing improvement in the linguistic performance of the system. | null | Main paper:
s.p. sereda:
the Product Service bulletins (averaging 72 pages per month) technicians training guides (averaging 50 pages per guide).Both the Electro-Motive diesel engine shop manuals and the Department of National Defence shop manual have been produced in a two-column side by side bilingual format.In discussing language translations using a computer, one very important point must be noted. No computer translation system, as you are aware, is perfect. Due to the intricacies in language rules the system cannot be expected to produce error free translation. Therefore, it must be understood that a computer translation system is in fact a computer assisted translation system where the human translator and not the computer plays the key role.The function of the translator in the computer environment differs from the manual environment in that the translator becomes a post-editor devoting his time and attention to refining the translation rather than spending a great deal of tedious time on manual translation of common words. Ideally, the translator's function should be to proof read the computer translated text and make few necessary refinements. This will happen only through vast improvements in the computer translation technology.With today's technology, the post-editing function is the most time consuming and costly segment of the computer translation process. In order to make the system economically viable, the post-editing function must be efficient to realize sufficient savings over the manual process to cover the extra cost of computer resource usage.On any given day, an experienced translator working on technical material, can manually produce final copy at a rate varying between 800 and 1,500 words per day depending on the difficulty of the text. A computer will process the same work in less than two minutes. Currently, our translators are able to post-edit at a rate of 3 to 4 times faster than manual translation. We believe this ratio can be increased further with linguistic enhancements.The factors that determine the effectiveness of computer translation and specifically the post-editing function can be classified as follows:-the linguistic performance of the system -the source language text to be translated -the availability of terminology -the translators who carry out post-editing and -the mechanical aspect of the system.It is obvious that the linguistic performance of the computer translation system is a vital factor in determining the efficiency of post-editing. If the analysis and synthesis systems function incorrectly, then the target language text will be difficult to edit. Experience has shown that a simple word-for-word translation is impractical to post-edit; that is, the cost of machine translation is greater than the cost of manual translation. The translation system must carry out a certain level of "intelligent" analysis of the source language and selective synthesis of the target language.It should be noted that different errors in translation have a different effect on the post-editing function. Minor errors involving articles and verb/adverb re-arrangement can be resolved quickly and easily by the translator. On the other hand, certain kinds of structural errors can be extremely difficult to correct, perhaps requiring the complete rewrite of the affected sentence.The linguistic performance of the system is the main factor that affects postediting. 
Unfortunately, it is the only area in the translation process where the user of the system has very little control in implementing necessary improvements. User can only identify the problem areas that require correction and then wait for the errors in the system to be corrected.Inconsistent linguistic analysis, unless corrected on a timely basis, can cause a great deal of frustration on the part of the translator. Of course, we realize that it is impossible to correct all linguistic problems. However, we hope that further enhancements to the system will minimize errors thus reducing the postediting requirement.The source text affects the degree of post-editing required in the translation. Text containing grammatical errors or that bends the rule of the source language will produce unpredictable translation. For instance, some of the problems encountered in technical publications are incomplete sentences, ambiguous text due to lack of articles and punctuation and the use of abbreviations. Well written, well punctuated and unambiguous source text results in the translation that will require minimal post-editing. The use of consistent terminology and sentence structure will also lessen the need for post-editing.To achieve acceptable level of translation, it may be necessary to evaluate and pre-edit the source text in a way that recognizes the limitations of machine translation. The need for pre-edit can be further reduced through controlled writing by applying certain text preparation guidelines on terminology usage and sentence structure.It is possible to lessen the overall translation requirements, especially with technical manuals, by substituting illustrations in place of the written text while the source materials are being prepared.It is apparent that even the most advanced computer translation system will be useless without the availability of sufficient terminology in the source and target languages. Our entire vocabulary is contained in two dictionaries, stem dictionary which contains single words and a dictionary containing multi-word expressions. Currently these dictionaries contain 52,500 stem words and 78,000 expressions which are constantly being updated. Most of the words and expressions in the dictionary are technical terms. The vocabulary is being expanded to include words and expressions pertaining to other subject fields of our business. We are currently working to expand the English-French vocabulary to include Spanish. The English-Spanish dictionary currently contains 14,200 stem words and 5,350 expressions.During the early stages of our activity we undertook an extensive dictionary coding procedure. We would translate a document, then code all unfound words and expressions for updating the dictionaries. As the vocabulary increases, the volume of the dictionary update decreases. However, the time and effort required for terminology research do not decrease proportionally with the volume. Finding a correct equivalent for technical terms is not a simple dictionary look-up operation. It is a difficult intellectual process involving knowledge of the technical field and the practices of the "source" and "target" languages.Clearly, improvements to the dictionary cost time and money, and should be offset by improvements in the post-editing performance. Thus, items which occur very infrequently in real text should not be coded as it is unlikely that the benefit will match the cost of the coding operation. 
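The two-dictionary arrangement mentioned above (a stem dictionary of single words and a separate dictionary of multi-word expressions) can be pictured as a two-tier look-up in which expressions are tried before individual stems. The sketch below is a toy illustration of that idea only; the entries and the fall-back behaviour are invented and do not describe the actual General Motors dictionaries.

    # Illustrative two-tier look-up: multi-word expressions are tried before single stems; data are invented.
    expressions = {"shop manual": "manuel d'atelier", "diesel engine": "moteur diesel"}
    stems = {"engine": "moteur", "manual": "manuel", "blower": "soufflante"}

    def lookup(phrase):
        """Prefer the expression dictionary, then fall back to word-by-word stem look-up."""
        key = phrase.lower()
        if key in expressions:
            return expressions[key]
        return " ".join(stems.get(word, f"[{word}?]") for word in key.split())

    print(lookup("diesel engine"))   # moteur diesel  (expression match)
    print(lookup("blower manual"))   # soufflante manuel  (stem-by-stem fall-back, word order not adjusted)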
Another point to consider in dictionary coding is that due to the nature of linguistic analysis, a dictionary change made for a specific text can have a negative effect on the translation of other texts.
One factor in the terminology research problem is perhaps peculiar to North America. Because of the cultural and technological dominance of the English language, the English terminology especially in technical fields is pervasive even in the francophone Province of Quebec. This situation in Quebec is expected to change through the program of francization undertaken by the Provincial Government of Quebec.
The process of post-editing involves two main tasks; to identify errors in the translation and to find solutions for the errors. To carry out these tasks, the translator must be completely fluent in the target language as well as the source language. Furthermore, the individual should have a good knowledge of the subject matter involved in the translation to be an effective post-editor. An individual fluent in the target language will be able to recognize linguistic errors with little difficulty. But the translator will have difficulty detecting meaning errors unless he can read and understand the source text. In technical text, the understanding of the subject matter will further enhance his ability to identify factual errors.
Once an error has been identified (whether a linguistic error or a meaning error), a correct form must be found. This process also requires native knowledge of the target language and specific technical skills in the appropriate field. It should be noted however that the translator must be controlled to some extent - particularly in terms of using acceptable standardized terminology and in terms of not wasting excessive amounts of time on purely stylistic changes.
It is possible that this phase involves the translator in consultation with other technical specialists or reference material.
The availability of the computer resources for translation activity offers the opportunity to take advantage of these resources to facilitate the post-editing function.
At General Motors, text processing and word processing systems are used to pre-edit the source text and to post-edit the target text. User friendly procedures have been developed to allow the translators to initiate computer translation by specifying options and parameters via video display terminals. The translators are provided with a document which lists the source and target texts in a side by side format to post-edit the translation.
To simplify terminology look-up, we have developed an English-French on-line dictionary system. This system provides the instantaneous translation of words or expressions contained in our dictionaries on a video display screen, in their basic forms, eliminating the need for hard copy dictionary listings.
We have also developed a terminal entry dictionary coding system to assist translators in updating the dictionaries. This coding system eliminates the need for hard copy coding sheets and at the same time allows us to control job submissions in a more efficient manner.
We have developed pre-processor and post-processor programs to simplify the production of the target language document. Any data such as photocomposition codes and text processing codes if left in the text may cause erroneous translation. The pre-processor program flags these codes as "do not translate" thus eliminating these codes from the translation process.
The post-processor reintroduces these codes back into the target language text thus eliminating the need for re-keyboarding them.
As you can gather, we have endeavoured to mechanize and facilitate the postediting process to maximize translation productivity. However, our experience shows the need for continuing improvement in the linguistic performance of the system.
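The pre-processor and post-processor described above can be illustrated as a mask-and-restore step around the translation pass. In the sketch below the {PC...} code syntax, the <DNT...> placeholders and the sample sentence are all invented; it shows the general pattern only, not the actual General Motors programs.

    # Illustrative pre-/post-processing around MT; the {PC...} codes and <DNT...> placeholders are invented.
    import re

    CODE_PATTERN = re.compile(r"\{PC\d+\}")   # stand-in for photocomposition / text processing codes

    def pre_process(text):
        """Replace each formatting code with a 'do not translate' placeholder and remember it."""
        codes = CODE_PATTERN.findall(text)
        for i, code in enumerate(codes):
            text = text.replace(code, f"<DNT{i}>", 1)
        return text, codes

    def post_process(translated, codes):
        """Reintroduce the original codes into the target language text."""
        for i, code in enumerate(codes):
            translated = translated.replace(f"<DNT{i}>", code)
        return translated

    masked, codes = pre_process("{PC1}Remove the cover.{PC2}")
    # ... the masked text would go through machine translation here ...
    print(post_process(masked.replace("Remove the cover.", "Déposer le couvercle."), codes))

The placeholders protect the codes during translation and the post-processor puts them back, so nothing has to be re-keyboarded.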
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 519 | 0.040462 | null | null | null | null | null | null | null | null |
a5e6eced4dec1850c181a199ec8b62b7386a3fcd | 236999913 | null | The {MT} errors which cause most trouble to posteditors | Errors can be categorized according to the amount of trouble caused. Simple errors can be classified objectively but complex errors involve more subjective judgments. This raises more general questions, i.e. standards of accuracy, intelligibility and style, the economics of MT and the involvement of the posteditor in improving the MT system. MT is most successful with repetitive texts. | {
"name": [
"Green, Roy"
],
"affiliation": [
null
]
} | null | null | Translating and the Computer: Practical experience of machine translation | 1981-11-01 | 0 | 6 | null | All errors cause trouble. This much we can say even before we begin to define our terms. A glance at any postedited text reveals that a fair amount of red ink has had to flow to bring the text up to accepted standards of human translation.In the context of postediting an error may be defined as 'any feature of the translation which causes the posteditor to put pen to paper'. Such a definition covers a multitude of sins, of both commission and omission.Various attempts have been made to classify and quantify errors in MT texts and this process must form the basis of any useful examination of errors. For the purposes of this presentation I propose three categories, based on the amount of trouble which the error causes to the posteditor.This category includes the misuse or omission of the definite article, wrong preposition, wrong personal pronoun, or the wrong choice of translationusually of a noun -when alternatives are possible (dossier = file/backrest). These are blatant errors, easy to identify and easy to postedit, particularly since only one or two words have to be deleted or supplied. These errors do not cause much trouble.This covers more substantial and complex errors. Examples are -a word-for-word translation of idiomatic expressions. A sentence beginning "L'année 1980 a vu se dérouler..." tends to fare rather badly.-errors which arise when the computer identifies a part of speech incorrectly.'Entre' translated as 'between' instead of 'enters', 'nous avions' as 'we aeroplanes'. These errors often manage to contaminate the rest of the sentence, with disastrous results.-the inability to change active verbs into the English passive can also lead to chaos.However, these are just more complex versions of Category I errors. Again it is patently obvious that something has gone wrong, and there is no problem inidentifying the words which have to be deleted and replaced. Often the quickest remedy is to correct the whole clause or sentence and write in one's own translation from scratch. In other words, the same technique as for Category I errors, but on a larger scale -delete and replace with correct translation.This category includes what might be termed 'doubtful translations and near misses'. On better days, or when feedback has had the desired impact, the computer sometimes provides reasonably intelligible phrases, clauses and even whole sentences. Paradoxically, this is precisely what causes most trouble. This is chiefly because at this stage the decisions which the posteditor must make become more subjective than for the first two categories. First of all he must make a yes/no decision, i.e. whether or not to alter the text. Then he must decide how far to go with his improvements. Should he 'patch up', salvaging as much as possible, should he cross it all out and substitute his own elegant translation, or should he choose one of several possible middle courses? Even if the translation had been produced by a human translator these decisions would be subjective. When MT is involved a further factor comes into play to affect one's judgment.This factor is the posteditor's general attitude towards MT. A posteditor who is generally sympathetic towards MT will tend to make a minimum of alterations. 
He wants MT to be successful, and so he may be led to accept a lower standard of translation, particularly when any alterations concern style rather than accuracy.On the other hand a posteditor who is generally unsympathetic towards MT will tend to find his worst suspicions confirmed at every turn, and will end up by condemning all MT out of hand and rewriting whole pages from scratch.This brings us to the question of what is acceptable. Three major criteria for assessing MT, or indeed any translation, are accuracy, intelligibility and style. The ideal is a high standard on all three counts, and my presentation is largely based on the assumption that this is the ideal we are aiming at. However, we are forced to admit that in the real world the priorities may be rather different.In any situation, accuracy should be the most important consideration. This is precisely where the computer should excel. Indeed, if it were the only criterion, the task of the posteditor would be reduced to that of correcting the blatant errors such as wrong alternative translations of nouns or wrong identification of part of speech. Intelligibility is not such a strong point of MT, but for end-users who are familiar with the subject matter complete intelligibility may not be essential.This leaves us with style. Style is highly prized in translating circles. It is not appreciated nearly so much in technical and commercial circles where the priorities tend to be speed and reasonable accuracy in many cases, rather than elegance and perfection.Many customers for translations will happily accept stylistic horrors if this cuts down the time they have to wait for a translation. Under these circumstances a good deal of time and money can be saved on 'stylistic postediting', and the most troublesome errors -i.e. of style -can simply be disregarded.However, not all posteditors are prepared to sell their souls by letting through translations which they consider to be unsatisfactory. This is a very important psychological aspect of postediting, which I now propose to consider.I have suggested one possible basis for classifying errors according to the amount of trouble caused to the posteditor. Trouble in this context may be defined as the amount of physical and mental effort required to correct errors. Although the aim was to assess this objectively, we saw that subjective judgments creep in. On top of this we must also consider the purely subjective question of the posteditor's emotional reaction to errors, which concerns a different but no less valid definition of 'trouble'. There is a 'coefficient of annoyance'. This cannot be quantified in accordance with any formula, as it varies from one individual to another, but one can criticize certain features of MT which will cause annoyance, in varying degrees, to most posteditors:The computer does not contain everyday words and expressions. It produces the wrong alternative translation. It does not produce different alternative translations as required.It does not change active infinitives in French into passive infinitives in English.It does not change nouns into verbs, or at least gerunds, in English. It translates idioms and idiomatic phrases word for word.The annoyance caused by the individual failings is compounded by the realization that all these errors will be made each time the particular case occurs in the original. At the beginning of a long document this is a depressing thought. 
A further compounding factor is the apparent intractability of these problems at the present stage of MT development.
This is a psychological problem and must be treated by psychological methods.
The answer lies in the value of feedback. The posteditor must see his work not merely as the unending task of correcting one-off errors. It is an investment of time and trouble which will pay dividends in the future. This future must not lie too far ahead. It is important for the posteditor to see the results of his work fairly quickly.
To achieve this the posteditor should ideally confine his efforts to texts which are repetitive in themselves and/or similar to each other in terms of subject matter and terminology. A representative batch of pages should be translated and postedited. Recurring errors should be identified, corrected where possible and fed into the computer before further translation work is done. The impact of this feedback will be apparent in the next batch of MT.
This produces two psychological benefits. The posteditor will be gratified to see the results of his work in the text, and will be motivated to do another stint of postediting plus feedback. The uplift of seeing the computer get something right every time certainly outweighs the depression felt earlier when the computer was getting it wrong every time.
Posteditors are people, not machines, and it is vital to minimize the amount of 'subjective' trouble caused by MT errors, so that the posteditor will more readily accept the amount of 'objective' trouble inherent in his task.
In conclusion, I should like to make a plea for a rational attitude towards MT. Posteditors are people, but computers are not. To regard computers as animate beings which make mistakes, display ignorance of elementary facts, and throw a fit when faced with complex sentences, is unscientific and emotional.
MT is a tool, or at best a set of mechanized tools. The human translator must realize that he is in charge. He must use MT, accept its present limitations, involve himself in it and thereby contribute to improving it.
This is how to deal with the trouble caused by MT errors. | null | Main paper:
introduction:
All errors cause trouble. This much we can say even before we begin to define our terms. A glance at any postedited text reveals that a fair amount of red ink has had to flow to bring the text up to accepted standards of human translation.In the context of postediting an error may be defined as 'any feature of the translation which causes the posteditor to put pen to paper'. Such a definition covers a multitude of sins, of both commission and omission.Various attempts have been made to classify and quantify errors in MT texts and this process must form the basis of any useful examination of errors. For the purposes of this presentation I propose three categories, based on the amount of trouble which the error causes to the posteditor.This category includes the misuse or omission of the definite article, wrong preposition, wrong personal pronoun, or the wrong choice of translationusually of a noun -when alternatives are possible (dossier = file/backrest). These are blatant errors, easy to identify and easy to postedit, particularly since only one or two words have to be deleted or supplied. These errors do not cause much trouble.This covers more substantial and complex errors. Examples are -a word-for-word translation of idiomatic expressions. A sentence beginning "L'année 1980 a vu se dérouler..." tends to fare rather badly.-errors which arise when the computer identifies a part of speech incorrectly.'Entre' translated as 'between' instead of 'enters', 'nous avions' as 'we aeroplanes'. These errors often manage to contaminate the rest of the sentence, with disastrous results.-the inability to change active verbs into the English passive can also lead to chaos.However, these are just more complex versions of Category I errors. Again it is patently obvious that something has gone wrong, and there is no problem inidentifying the words which have to be deleted and replaced. Often the quickest remedy is to correct the whole clause or sentence and write in one's own translation from scratch. In other words, the same technique as for Category I errors, but on a larger scale -delete and replace with correct translation.This category includes what might be termed 'doubtful translations and near misses'. On better days, or when feedback has had the desired impact, the computer sometimes provides reasonably intelligible phrases, clauses and even whole sentences. Paradoxically, this is precisely what causes most trouble. This is chiefly because at this stage the decisions which the posteditor must make become more subjective than for the first two categories. First of all he must make a yes/no decision, i.e. whether or not to alter the text. Then he must decide how far to go with his improvements. Should he 'patch up', salvaging as much as possible, should he cross it all out and substitute his own elegant translation, or should he choose one of several possible middle courses? Even if the translation had been produced by a human translator these decisions would be subjective. When MT is involved a further factor comes into play to affect one's judgment.This factor is the posteditor's general attitude towards MT. A posteditor who is generally sympathetic towards MT will tend to make a minimum of alterations. 
He wants MT to be successful, and so he may be led to accept a lower standard of translation, particularly when any alterations concern style rather than accuracy.On the other hand a posteditor who is generally unsympathetic towards MT will tend to find his worst suspicions confirmed at every turn, and will end up by condemning all MT out of hand and rewriting whole pages from scratch.This brings us to the question of what is acceptable. Three major criteria for assessing MT, or indeed any translation, are accuracy, intelligibility and style. The ideal is a high standard on all three counts, and my presentation is largely based on the assumption that this is the ideal we are aiming at. However, we are forced to admit that in the real world the priorities may be rather different.In any situation, accuracy should be the most important consideration. This is precisely where the computer should excel. Indeed, if it were the only criterion, the task of the posteditor would be reduced to that of correcting the blatant errors such as wrong alternative translations of nouns or wrong identification of part of speech. Intelligibility is not such a strong point of MT, but for end-users who are familiar with the subject matter complete intelligibility may not be essential.This leaves us with style. Style is highly prized in translating circles. It is not appreciated nearly so much in technical and commercial circles where the priorities tend to be speed and reasonable accuracy in many cases, rather than elegance and perfection.Many customers for translations will happily accept stylistic horrors if this cuts down the time they have to wait for a translation. Under these circumstances a good deal of time and money can be saved on 'stylistic postediting', and the most troublesome errors -i.e. of style -can simply be disregarded.However, not all posteditors are prepared to sell their souls by letting through translations which they consider to be unsatisfactory. This is a very important psychological aspect of postediting, which I now propose to consider.I have suggested one possible basis for classifying errors according to the amount of trouble caused to the posteditor. Trouble in this context may be defined as the amount of physical and mental effort required to correct errors. Although the aim was to assess this objectively, we saw that subjective judgments creep in. On top of this we must also consider the purely subjective question of the posteditor's emotional reaction to errors, which concerns a different but no less valid definition of 'trouble'. There is a 'coefficient of annoyance'. This cannot be quantified in accordance with any formula, as it varies from one individual to another, but one can criticize certain features of MT which will cause annoyance, in varying degrees, to most posteditors:The computer does not contain everyday words and expressions. It produces the wrong alternative translation. It does not produce different alternative translations as required.It does not change active infinitives in French into passive infinitives in English.It does not change nouns into verbs, or at least gerunds, in English. It translates idioms and idiomatic phrases word for word.The annoyance caused by the individual failings is compounded by the realization that all these errors will be made each time the particular case occurs in the original. At the beginning of a long document this is a depressing thought. 
A further compounding factor is the apparent intractability of these problems at the present stage of MT development.
This is a psychological problem and must be treated by psychological methods.
The answer lies in the value of feedback. The posteditor must see his work not merely as the unending task of correcting one-off errors. It is an investment of time and trouble which will pay dividends in the future. This future must not lie too far ahead. It is important for the posteditor to see the results of his work fairly quickly.
To achieve this the posteditor should ideally confine his efforts to texts which are repetitive in themselves and/or similar to each other in terms of subject matter and terminology. A representative batch of pages should be translated and postedited. Recurring errors should be identified, corrected where possible and fed into the computer before further translation work is done. The impact of this feedback will be apparent in the next batch of MT.
This produces two psychological benefits. The posteditor will be gratified to see the results of his work in the text, and will be motivated to do another stint of postediting plus feedback. The uplift of seeing the computer get something right every time certainly outweighs the depression felt earlier when the computer was getting it wrong every time.
Posteditors are people, not machines, and it is vital to minimize the amount of 'subjective' trouble caused by MT errors, so that the posteditor will more readily accept the amount of 'objective' trouble inherent in his task.
In conclusion, I should like to make a plea for a rational attitude towards MT. Posteditors are people, but computers are not. To regard computers as animate beings which make mistakes, display ignorance of elementary facts, and throw a fit when faced with complex sentences, is unscientific and emotional.
MT is a tool, or at best a set of mechanized tools. The human translator must realize that he is in charge. He must use MT, accept its present limitations, involve himself in it and thereby contribute to improving it.
This is how to deal with the trouble caused by MT errors.
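The batch-and-feedback routine recommended above can be summarised in a small sketch. It is purely illustrative: the data layout, the error labels (borrowed from the examples earlier in this paper) and the idea of simply counting recurring errors per batch are assumptions, not a description of any particular MT system.

    # Illustrative batch translate -> post-edit -> feed back loop; everything here is schematic.
    from collections import Counter

    def run_batch(pages, known_fixes):
        """Pretend-translate a batch and record which recurring errors still appear."""
        remaining = Counter()
        for page in pages:
            for error in page["recurring_errors"]:
                if error not in known_fixes:
                    remaining[error] += 1
        return remaining

    known_fixes = set()
    batches = [
        [{"recurring_errors": ["dossier->backrest", "entre->between"]}] * 10,
        [{"recurring_errors": ["dossier->backrest", "entre->between"]}] * 10,
    ]
    for n, batch in enumerate(batches, 1):
        errors = run_batch(batch, known_fixes)
        print(f"batch {n}: {sum(errors.values())} recurring errors left")
        known_fixes.update(errors)        # corrections fed back before the next batch

Run as-is, the second batch reports no remaining recurring errors, which is exactly the visible pay-off the posteditor is said to need.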
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 519 | 0.011561 | null | null | null | null | null | null | null | null |
8c89fe5657d63e3494e792ad0552efc43e66c681 | 48342881 | null | Psychological and ergonomic factors in machine translation | The downward trend in the cost of computer facilities will result in increasingly widespread use of sophisticated applications such as machine translation. Many current systems have serious human factors shortcomings, and close attention must be paid to the real requirements of the end user if the full potential of the new technology is to be realised. | {
"name": [
"Bevan, Nigel"
],
"affiliation": [
null
]
} | null | null | Translating and the Computer: Practical experience of machine translation | 1981-11-01 | 1 | 1 | null | In the past computer systems were primarily used by the programmers who designed them, and the facilities provided were those appropriate to their needs. Although these systems functioned efficiently, they were only suitable for use by computer professionals. As costs have decreased, computers have become widely used by people with no previous experience of them. Unfortunately many systems are still designed more for the convenience of the programmer than for the user. This problem will become more acute as further cost reductions lead to even more widespread use of computers in professional applications such as machine translation. This makes it particularly important to take account of the psychological and ergonomic requirements of the ordinary non-specialist user.Looking ahead on a 5 year time-scale, one can expect an order of magnitude reduction in the cost of computer power. Although it may still require a machine costing the equivalent of today's IBM 370 to use Eurotra, SYSTRAN will be running on the equivalent of today's word processor, and personal micro-computers will be used for word-processing and storing terminology data-banks. Hand-held translators, little more than toys at present, will be capable of storing dictionaries containing tens of thousands of terms. This is only indicative of the potential of the computer of tomorrow. With machines of this capability widely available, the main limitation on the scope of applications in fields such as machine translation will be the imagination of the implementers.In the past computer systems could only be successful if designed primarily for machine efficiency. The professional programmer was judged to have done a good job if his program was well coded, and ran efficiently without errors. Unfortunately this philosophy is often incompatible with systems designed for user efficiency. If you ask systems analysts how to make systems user-friendly, they will give you a confident answer, but if you ask 3 analysts, you will probably obtain 3 different answers. The ultimate criterionfor user friendly systems should be to incorporate what is easiest and most natural for the end user. This means that tasks that are simple should be simple to perform, with the machine adapted to the user, rather than the user to the machine.Although this is a principle with which few would disagree, in practice it is easy to find examples of unnecessarily complicated systems. An application in which ease of use is particularly important is word processing. Yet many word processing systems are designed so that changes to a document which appear to be simple, actually involve unduly complicated procedures. A few examples will illustrate the problems which can arise:After editing a document, a simple requirement is to state that editing is complete, and that the changes should be made permanent.On better systems, a simple function button will store the new version of your document, and automatically keep a backup copy of the previous version. However on other systems, the procedure recommended at the end of each editing session involves typing 3 long commands operating on different file names. This process is natural and necessary for the computer, but of no interest to the user. 2. A frequent task when post-editing is to reverse the order of a pair of words. 
On many word processing systems this was not anticipated, and there is no provision to incorporate this facility as a simple function.On more flexible systems this type of feature can be added as required.3. On one popular microcomputer, no information is given when there is a machine error, unless the user explicitly asks whether an error occurred.Since errors are not very frequent, few people make a point of checking for errors after every action. However, if errors are not detected when they do occur, whole documents can be lost.These shortcomings are examples of the low priority frequently given to discovering the real needs of the end user.Looking to the future, the ultimate target for the system designer should be a computer which sells itself.The salesman need only carry the machine into your office, plug it in, and announce that it is self-explanatory, and will solve all your needs. He will then leave you to try it. By the time he returns a week later you should have found it so easy to use that you have already implemented all your office procedures, and have no hesitation in buying it.There is a distinction between direct use of a computer on-line (as for example with the Weidner system), and off line use of machine output (as is normal with SYSTRAN). Using an on-line system is a highly interactive process which in many respects resembles interaction with another person. The user will inevitably project a personality onto the system, which is dependent on the nature of the interaction. A system which, for example, responds helpfully to user errors and incorporates appropriate encouragement, will be perceived as more friendly and will therefore be used more willingly.A cold, concise, and unforgiving machine, although functioning perfectly correctly, will be used with less enthusiasm.The design goal should be a machine which emulates as closely as possible the behaviour of a helpful human expert.It is equally important to decide how the user should communicate with the machine. With a VDU the most practical solutions are: multiple choice from a menu, prompting for responses, or simple commands.Menus are a popular solution and give an ideal introduction to a system. Unfortunately, both menus and prompts can become very tedious when a long chain of selections has to be used repetitively. This is particularly exasperating when there is a noticeable system delay after each choice. The best answer is to allow sophisticated users to type ahead a sequence of instructions in the form of a command. Simple tasks should be simple to do, but users with more experience will be prepared to learn more complicated procedures in order to save themselves time.Eyestrain and fatigue are well known complaints of regular terminal operators. The ergonomics of terminal usage have now been studied in great detail, and almost all the difficulties can be attributed to bad positioning of both the VDU and its operator. Although a well set up VDU should be easy to read, it has a low luminance compared to that of paper in a brightly lit office, and can also suffer from the distracting effects of reflections on the screen. It is therefore essential to place VDUs away from light sources, in a position where there are a minimum of reflections. Another cause of fatigue is bad posture, and the seating and keyboard should be placed so that the user's back is straight and forearms horizontal. 
Anyone concerned with the positioning of VDUs is recommended to read a book such as the VDT Manual [1] .The technology now exists to implement a totally electronic office. An analogy can be made with the choice already faced by many translators to type or dictate their translation. Although dictation requires a change in working habits, it is usually regarded as the most cost-effective use of a translator's valuable time.Switching to word processing represents a similar change. Although asking a translator to use a keyboard and screen might appear to be a step backwards, a well designed word processor can offer many advantages. Alterations to the text can be made very simply at any stage. Cut and paste within a document is trivial, and it is very easy to store and extract standard phrases and paragraphs held in a personal database.In the electronic office, documents for translation will be received in machine readable form, or alternatively typed directly into a word processor. A translator can then initiate a request for MT from his terminal, and have the results returned for post-editing. While editing the changes on the word processor, problems with terminology could be resolved by linking the terminal to a terminology data base. When post-editing is complete the translation can be read fluently without the distraction of complicated handwritten corrections. Any further refinements can be made immediately. The word processor can check the spelling of the final document, and automatically format it for printing.Post-editing is often carried out on batch print-outs from a computer. In this case the content of the print-out is probably predetermined, and the user has little if any control over what he receives. The translator with a print-out may feel like a clerk with a third hand office memo. In these circumstances errors by the computer can be particularly infuriating, and there is a natural tendency to blame the remote entity responsible. This is hardly surprising, since even inanimate objects can appear to acquire a malevolent personality. If you can have one of those mornings when "It's trying to rain, and the car doesn't want to start", it is hardly surprising that you may decide that "the computer is not very intelligent today". Systems you cannot influence are particularly frustrating. In the case of machine translation, the translator will take a less anthropomorphic and more constructive position if he can feed back his own suggestions into the development of the system.The motivation for introducing MT is normally cost effectiveness of the translation process. This implies that post-editing must be fast, and thus require the minimum of changes. Individual translators' views of what constitutes an acceptable translation will vary, and some will make far fewer changes than others. What constitutes a good post-editor? He needs the flexibility of mind to see how the machine's attempt at translation can, with the minimum of changes, be turned into something acceptable. To achieve this may require training in efficient techniques derived from the experience of the best post-editors.One fear is that too much post-editing will distort an individual's perception of the language. This risk could be minimised if translators alternated post-editing with conventional translation.If post-editing does have an effect it will probably be some time before this is evident.An analogy may be found with the pocket calculator. 
Has its widespread use expanded children's horizons, or destroyed their understanding of arithmetic? If the latter, do we conclude that we would prefer to be without the pocket calculator? Can one envisage circumstances where the same would be said of MT?The human factors problems of current systems are frequently underestimated. The advent of cheap computer technology offers many exciting possibilities, but its full potential can only be realised if the user's psychological and ergonomic needs are fully understood. It is the responsibility of potential computer users to insist that machines are used to remove the drudgery from life and expand our horizons, rather than become our masters. The machines must serve our needs, and not we theirs. Computers should be allowed to take over the routine work in life, leaving humans free to concentrate their attention on creative activities, and real personal relationships. | null | null | null | null | Main paper:
introduction:
In the past computer systems were primarily used by the programmers who designed them, and the facilities provided were those appropriate to their needs. Although these systems functioned efficiently, they were only suitable for use by computer professionals. As costs have decreased, computers have become widely used by people with no previous experience of them. Unfortunately many systems are still designed more for the convenience of the programmer than for the user. This problem will become more acute as further cost reductions lead to even more widespread use of computers in professional applications such as machine translation. This makes it particularly important to take account of the psychological and ergonomic requirements of the ordinary non-specialist user. Looking ahead on a 5 year time-scale, one can expect an order of magnitude reduction in the cost of computer power. Although it may still require a machine costing the equivalent of today's IBM 370 to use Eurotra, SYSTRAN will be running on the equivalent of today's word processor, and personal micro-computers will be used for word-processing and storing terminology data-banks. Hand-held translators, little more than toys at present, will be capable of storing dictionaries containing tens of thousands of terms. This is only indicative of the potential of the computer of tomorrow. With machines of this capability widely available, the main limitation on the scope of applications in fields such as machine translation will be the imagination of the implementers. In the past computer systems could only be successful if designed primarily for machine efficiency. The professional programmer was judged to have done a good job if his program was well coded, and ran efficiently without errors. Unfortunately this philosophy is often incompatible with systems designed for user efficiency. If you ask systems analysts how to make systems user-friendly, they will give you a confident answer, but if you ask 3 analysts, you will probably obtain 3 different answers. The ultimate criterion for user-friendly systems should be to incorporate what is easiest and most natural for the end user. This means that tasks that are simple should be simple to perform, with the machine adapted to the user, rather than the user to the machine. Although this is a principle with which few would disagree, in practice it is easy to find examples of unnecessarily complicated systems. An application in which ease of use is particularly important is word processing. Yet many word processing systems are designed so that changes to a document which appear to be simple actually involve unduly complicated procedures. A few examples will illustrate the problems which can arise: 1. After editing a document, a simple requirement is to state that editing is complete, and that the changes should be made permanent. On better systems, a simple function button will store the new version of your document, and automatically keep a backup copy of the previous version. However on other systems, the procedure recommended at the end of each editing session involves typing 3 long commands operating on different file names. This process is natural and necessary for the computer, but of no interest to the user. 2. A frequent task when post-editing is to reverse the order of a pair of words. 
On many word processing systems this was not anticipated, and there is no provision to incorporate this facility as a simple function.On more flexible systems this type of feature can be added as required.3. On one popular microcomputer, no information is given when there is a machine error, unless the user explicitly asks whether an error occurred.Since errors are not very frequent, few people make a point of checking for errors after every action. However, if errors are not detected when they do occur, whole documents can be lost.These shortcomings are examples of the low priority frequently given to discovering the real needs of the end user.Looking to the future, the ultimate target for the system designer should be a computer which sells itself.The salesman need only carry the machine into your office, plug it in, and announce that it is self-explanatory, and will solve all your needs. He will then leave you to try it. By the time he returns a week later you should have found it so easy to use that you have already implemented all your office procedures, and have no hesitation in buying it.There is a distinction between direct use of a computer on-line (as for example with the Weidner system), and off line use of machine output (as is normal with SYSTRAN). Using an on-line system is a highly interactive process which in many respects resembles interaction with another person. The user will inevitably project a personality onto the system, which is dependent on the nature of the interaction. A system which, for example, responds helpfully to user errors and incorporates appropriate encouragement, will be perceived as more friendly and will therefore be used more willingly.A cold, concise, and unforgiving machine, although functioning perfectly correctly, will be used with less enthusiasm.The design goal should be a machine which emulates as closely as possible the behaviour of a helpful human expert.It is equally important to decide how the user should communicate with the machine. With a VDU the most practical solutions are: multiple choice from a menu, prompting for responses, or simple commands.Menus are a popular solution and give an ideal introduction to a system. Unfortunately, both menus and prompts can become very tedious when a long chain of selections has to be used repetitively. This is particularly exasperating when there is a noticeable system delay after each choice. The best answer is to allow sophisticated users to type ahead a sequence of instructions in the form of a command. Simple tasks should be simple to do, but users with more experience will be prepared to learn more complicated procedures in order to save themselves time.Eyestrain and fatigue are well known complaints of regular terminal operators. The ergonomics of terminal usage have now been studied in great detail, and almost all the difficulties can be attributed to bad positioning of both the VDU and its operator. Although a well set up VDU should be easy to read, it has a low luminance compared to that of paper in a brightly lit office, and can also suffer from the distracting effects of reflections on the screen. It is therefore essential to place VDUs away from light sources, in a position where there are a minimum of reflections. Another cause of fatigue is bad posture, and the seating and keyboard should be placed so that the user's back is straight and forearms horizontal. 
Anyone concerned with the positioning of VDUs is recommended to read a book such as the VDT Manual [1] .The technology now exists to implement a totally electronic office. An analogy can be made with the choice already faced by many translators to type or dictate their translation. Although dictation requires a change in working habits, it is usually regarded as the most cost-effective use of a translator's valuable time.Switching to word processing represents a similar change. Although asking a translator to use a keyboard and screen might appear to be a step backwards, a well designed word processor can offer many advantages. Alterations to the text can be made very simply at any stage. Cut and paste within a document is trivial, and it is very easy to store and extract standard phrases and paragraphs held in a personal database.In the electronic office, documents for translation will be received in machine readable form, or alternatively typed directly into a word processor. A translator can then initiate a request for MT from his terminal, and have the results returned for post-editing. While editing the changes on the word processor, problems with terminology could be resolved by linking the terminal to a terminology data base. When post-editing is complete the translation can be read fluently without the distraction of complicated handwritten corrections. Any further refinements can be made immediately. The word processor can check the spelling of the final document, and automatically format it for printing.Post-editing is often carried out on batch print-outs from a computer. In this case the content of the print-out is probably predetermined, and the user has little if any control over what he receives. The translator with a print-out may feel like a clerk with a third hand office memo. In these circumstances errors by the computer can be particularly infuriating, and there is a natural tendency to blame the remote entity responsible. This is hardly surprising, since even inanimate objects can appear to acquire a malevolent personality. If you can have one of those mornings when "It's trying to rain, and the car doesn't want to start", it is hardly surprising that you may decide that "the computer is not very intelligent today". Systems you cannot influence are particularly frustrating. In the case of machine translation, the translator will take a less anthropomorphic and more constructive position if he can feed back his own suggestions into the development of the system.The motivation for introducing MT is normally cost effectiveness of the translation process. This implies that post-editing must be fast, and thus require the minimum of changes. Individual translators' views of what constitutes an acceptable translation will vary, and some will make far fewer changes than others. What constitutes a good post-editor? He needs the flexibility of mind to see how the machine's attempt at translation can, with the minimum of changes, be turned into something acceptable. To achieve this may require training in efficient techniques derived from the experience of the best post-editors.One fear is that too much post-editing will distort an individual's perception of the language. This risk could be minimised if translators alternated post-editing with conventional translation.If post-editing does have an effect it will probably be some time before this is evident.An analogy may be found with the pocket calculator. 
Has its widespread use expanded children's horizons, or destroyed their understanding of arithmetic? If the latter, do we conclude that we would prefer to be without the pocket calculator? Can one envisage circumstances where the same would be said of MT?The human factors problems of current systems are frequently underestimated. The advent of cheap computer technology offers many exciting possibilities, but its full potential can only be realised if the user's psychological and ergonomic needs are fully understood. It is the responsibility of potential computer users to insist that machines are used to remove the drudgery from life and expand our horizons, rather than become our masters. The machines must serve our needs, and not we theirs. Computers should be allowed to take over the routine work in life, leaving humans free to concentrate their attention on creative activities, and real personal relationships.
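As a rough illustration of the kind of single-keystroke convenience function argued for above — the example of reversing the order of a pair of words during post-editing — the following Python sketch shows one plausible way such a command could work. It is purely hypothetical: the function name, the flat string-plus-cursor representation of the text, and the cursor behaviour are all assumptions made for illustration, not a description of any of the word processors discussed.

```python
import re
from typing import Tuple

def transpose_words(line: str, cursor: int) -> Tuple[str, int]:
    """Swap the word at the cursor with the word that follows it.

    A minimal sketch of a 'reverse a pair of words' editing command;
    a real word processor would bind something like this to one key.
    """
    spans = [m.span() for m in re.finditer(r"\S+", line)]
    for i in range(len(spans) - 1):
        start, end = spans[i]
        if start <= cursor <= end:                        # cursor is on word i
            nstart, nend = spans[i + 1]
            swapped = (line[:start] + line[nstart:nend]   # word i+1 first
                       + line[end:nstart]                 # original separator
                       + line[start:end] + line[nend:])   # then word i
            return swapped, nend                          # cursor lands after the pair
    return line, cursor                                   # nothing to swap

if __name__ == "__main__":
    text = "the translation machine produced this sentence"
    new_text, _ = transpose_words(text, text.index("translation"))
    print(new_text)   # -> "the machine translation produced this sentence"
```

Used this way, the correction that the paper describes as "a frequent task when post-editing" becomes a single keystroke rather than a delete-and-retype sequence, which is the spirit of the paper's argument that simple tasks should be simple to perform.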
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 519 | 0.001927 | null | null | null | null | null | null | null | null |
effcc4862349e14b3afd027e94126eadcf378279 | 236999888 | null | Summary of discussion: Speculation; The Limits of Innovation | The final discussion period of the conference covered some of the topics raised by speakers in the last session and some general issues from the conference as a whole. | {
"name": [
"Hutchins, W. John"
],
"affiliation": [
null
]
} | null | null | Translating and the Computer: Practical experience of machine translation | 1981-11-01 | 0 | 0 | null | null | null | null | null | There were a number of comments relating to Margaret Masterman's remarks about the need for a model of human translation processes. It was a matter for speculation what similarities and differences there might be between language performance in general and translation activities, but it was suspected that processes involving pattern matching, assembling and restructuring were common elements. It was suggested that research on the psychology of interpreting might offer some insights as it is likely that interpreters speak and think in ways that resemble the methods and performances of translators. Another speaker pleaded for a neuropsychological model on the grounds that computational models tend to reinforce trends towards the subjugation of people to computer requirements (citing the 'multi-national customized English' of the Xerox Corporation as an instance). But there was not much enthusiasm for the models of neuropsychology and it was argued also that there can be no conclusive proof that individuals do or do not behave according to a particular model or like a particular computer program.It was remarked how little is known what post-editors actually do and what they contribute to the quality of finished translations. We do not know what makes a good post-editor or what makes a poor or unsatisfactory one; however, there was one suggestion that competence in the source language was important since postediting appeared to demand strength in both languages at a high level.Mrs King was asked to elaborate on certain aspects of the EUROTRA project: the structure of dependency-grammar representations, the flexibility and freedom available to programmers and linguists in the design of analysis and transformation procedures, and the role of interface structures to ensure the successful operation of separately developed programs within the whole system.The need for MT systems to accommodate the imaginative and 'creative' aspects of translation was a problem which Professor Knowles related to the difficult decisions about how much meaning should be codified as static information in MT dictionaries and how much should be computed by dynamic processes at the time of analysis of individual texts.Practical issues of MT economics and effectiveness were raised in the discussion. On the question of how much translation work an organization would need to be doing before MT is a viable proposition it was suggested as a rough guideline a minimum level of at least two million words a year, although much depends on the subject range, the computer system and the organization structure. As for the amount of pre-editing essential for satisfactory results, particularly if texts are not written by people in their mother tongue, there were no easy answers. The factors to be taken into account include not only the subjects of texts and the abilities of the writers but also the intended uses of translations, the competence and knowledge of probable readers, the quality standards of the organization and whether pre-editing is done by the authors or by others. At the very least, the correction of punctuation is desirable before input to a MT system. It was suggested that the type of text translated was a major factor in determining whether results are adequate and readers are satisfied; in general, it was claimed | Main paper:
1.:
There were a number of comments relating to Margaret Masterman's remarks about the need for a model of human translation processes. It was a matter for speculation what similarities and differences there might be between language performance in general and translation activities, but it was suspected that processes involving pattern matching, assembling and restructuring were common elements. It was suggested that research on the psychology of interpreting might offer some insights as it is likely that interpreters speak and think in ways that resemble the methods and performances of translators. Another speaker pleaded for a neuropsychological model on the grounds that computational models tend to reinforce trends towards the subjugation of people to computer requirements (citing the 'multi-national customized English' of the Xerox Corporation as an instance). But there was not much enthusiasm for the models of neuropsychology and it was argued also that there can be no conclusive proof that individuals do or do not behave according to a particular model or like a particular computer program.It was remarked how little is known what post-editors actually do and what they contribute to the quality of finished translations. We do not know what makes a good post-editor or what makes a poor or unsatisfactory one; however, there was one suggestion that competence in the source language was important since postediting appeared to demand strength in both languages at a high level.Mrs King was asked to elaborate on certain aspects of the EUROTRA project: the structure of dependency-grammar representations, the flexibility and freedom available to programmers and linguists in the design of analysis and transformation procedures, and the role of interface structures to ensure the successful operation of separately developed programs within the whole system.The need for MT systems to accommodate the imaginative and 'creative' aspects of translation was a problem which Professor Knowles related to the difficult decisions about how much meaning should be codified as static information in MT dictionaries and how much should be computed by dynamic processes at the time of analysis of individual texts.Practical issues of MT economics and effectiveness were raised in the discussion. On the question of how much translation work an organization would need to be doing before MT is a viable proposition it was suggested as a rough guideline a minimum level of at least two million words a year, although much depends on the subject range, the computer system and the organization structure. As for the amount of pre-editing essential for satisfactory results, particularly if texts are not written by people in their mother tongue, there were no easy answers. The factors to be taken into account include not only the subjects of texts and the abilities of the writers but also the intended uses of translations, the competence and knowledge of probable readers, the quality standards of the organization and whether pre-editing is done by the authors or by others. At the very least, the correction of punctuation is desirable before input to a MT system. It was suggested that the type of text translated was a major factor in determining whether results are adequate and readers are satisfied; in general, it was claimed
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 519 | 0 | null | null | null | null | null | null | null | null |
b354603a7a99bf00eb559c9fec1af9893947a242 | 62172415 | null | The importance of feedback from translators in the development of high-quality machine translation | Until fairly recently, those involved in the design and development of M.T. systems tended to be expert programmers and moderately competent linguists, often with a good working knowledge of several foreign languages, but seldom, if ever, with any first-hand experience of professional translation. Perhaps this explains why M.T. was introduced first and foremost in the area of information scanning where huge lexical data bases in combination with rather rudimentary translation programs provided usable results. However, now that a number of translators have begun to use M.T. as an aid in their day-to-day work, the feedback received from them is proving to be a vital source of information, not only in the correction of present shortcomings but in the further enhancement of systems at all levels. | {
"name": [
"Pigott, Ian M."
],
"affiliation": [
null
]
} | null | null | Translating and the Computer: Practical experience of machine translation | 1981-11-01 | 7 | 10 | null | At the beginning of this year (1981), translators at the Commission's translation division in Luxembourg began making use of Systran machine translations as an aid in their routine work. While many have reacted somewhat negatively to this new approach, the enthusiasm demonstrated by others seems to mark something of a turning point in the interplay between man and machine in this field.In my analysis today, I should therefore like to examine some of the possible reasons why translators have been so hesitant to turn to the computer for assistance in high-quality translation work despite the fact that M.T. systems have been in operation for a good many years. I shall also give a brief account of the types of feedback received from translators and the vital part it now plays in M.T. development at the Commission.It is not a primary aim of this conference to describe the workings or mechanics of the systems under consideration. Indeed, a considerable amount of literature has already been published on the subject. But for the purposes of this talk, let me just say that Systran as used by the Commission is a free-syntax batch-operated system covering many subject fields which produces raw machine translations without any human intervention apart from text input.If we trace back the history of M.T., we find that most of the basic design and development work in the fifties and sixties was carried out by expert computer programmers assisted by a new breed of linguists, soon to be known as computational linguists. However, seldom, if ever, were actual translators involved. This is perhaps not surprising for a number of reasons. Firstly, the hardware systems available at the time were extremely limited in capacity and performance, with the result that any program had to be carefully adapted to the constraints of the machine. Secondly, in the absence of any dependable high-level computer languages, the programming itself had to be done in machine language which was virtually incomprehensible to linguists and translators. Thirdly, translators themselves were very sceptical about M.T. in general and it was therefore up to programmers and computational linguists to prove that translation could in fact be handled by computers.There were, as we all know, many ups and downs in the early days. Large sums of money were invested but for the most part results were very disappointing. Indeed, the publication of the ALPAC report in 1966 with its recommendation that investment in M.T. development should be discontinued seemed to have finally put an end to further progress.Nevertheless a number of individuals remained convinced that machine translation was indeed feasible and one or two of them continued development work privately. In particular, by the late sixties Dr Peter Toma's original Russian-English Systran system was providing extremely useful output for the U.S. Air Force. Soon after, the Logos system started producing satisfactory translations from English into Vietnamese, and a version of the Georgetown system was used to a limited extent by the EC research centre in Ispra.But all these systems were used primarily for information gathering purposes. 
The aim was not so much to produce elegant translations for general distribution and publication as to provide experts with a rough-and-ready indication of the topics covered by documents they were unable to read in the original language. Translators who had the opportunity to examine these early results were almost invariably highly critical of the quality standards reached and some took great delight in compiling lists of particularly hilarious M.T. output. We have all been reminded time and time again of how the Russian saying "Out of sight, out of mind" allegedly produced "Invisible idiot" in English. Yet the users themselves, above all scientists and technicians working for the U.S. Air Force, reacted quite differently. They continued to put in more and more requests for raw Russian-English machine translations, covering an ever increasing number of subject fields. As a result, the Air Force extended its financing of the Systran Russian-English system, which as time went by was to serve as a prototype for developments covering other language pairs. The success of these earlier systems undoubtedly lay in the huge machine dictionaries containing hundreds of thousands of technical terms in dozens of subject fields. Yet as computer performance increased, the translation programs were significantly improved, producing an ever more intelligible standard of output. It was indeed this success which encouraged the EC Commission to acquire the Systran system in 1976 for translation from English into French and later from French into English and English into Italian. However, while initial tests seemed to indicate that M.T. could be used for information scanning purposes, as in providing raw translations of databases connected to Euronet, the Commission's primary objective, that of assisting in-house translators in their day-to-day work by providing M.T. printouts for human post-editing, proved to be a much more difficult task. For example, of those translators who were invited to participate in the initial development of Systran, all but one left the project after the first two months, either on the grounds that they were unable to understand the technical workings of the system or, more often, because they simply did not believe the standard of output could ever be significantly improved. Then again, despite some rather positive statistics on gains in cost-efficiency documented in evaluations carried out on the Systran system, translators generally opposed the introduction of M.T., maintaining the output provided was simply of no use to them. Although these reactions were very disappointing at the time - and indeed threatened to jeopardize the entire future of the project - in retrospect they can be understood. Whereas information scientists had been content with intelligibility, the Commission's translators were far more concerned with the accuracy of a translation. Moreover, even in cases where a machine translation was accurate, in the sense that the meaning of the translation was the same as that of the original, translators had the impression that mistakes in terminology, syntax and, for example, capitalization outweighed any benefits. Some of these reactions may have been psychological. 
Seasoned translators could hardly be expected to relish the thought of having to correct errors from the computer which a 10-year-old child would never have made, nor could they be expected to give overwhelming support to a system which, to them at any rate, seemed to pose a real threat to their future. Yet on closer analysis, it appeared that many of the negative reactions received stemmed from the fact that the quality of the machine output - however intelligible - was simply not suitable for post-editing. A great deal of effort was therefore put into generally upgrading the quality of the systems under development by introducing more terminology, improving the performance of the analysis and synthesis programs and providing easily readable print-outs in upper and lower case. Yet in the absence of any real feedback from translators in the form of post-editing, those of us responsible for quality improvement could only guess at what the real priorities were. To help us identify these we entrusted Margaret Masterman of C.L.R.U. with a study on the future potential of Systran. One of the most important recommendations which came out of this study was that translators should be provided with full documentation on the system in natural language in order that they could play an active part in its improvement. This recommendation led to a further study by C.L.R.U. to examine the feasibility of automatically transcribing the Systran program from IBM macro into natural English. The "opening-up of the black box" resulting from this work (which is still in progress) proved to be a major step forward in encouraging translators to take an interest in the system and was paramount in overcoming one of the psychological barriers between the human translator and the machine. Another study which provided us with a much better idea of translators' requirements was that undertaken by Veronica Lawson on the applicability of Systran to the translation of patents. The strict discipline of working closely with a translator over a number of months led to a better understanding on our part of what was or what was not acceptable and indeed resulted in major improvements. The carefully annotated printouts we received from Mrs Lawson and her colleagues proved to be an excellent source of feedback, particularly as errors considered to be "serious" were highlighted by colour coding. Finally, the study showed that professional translators were indeed willing and able to make a real contribution to M.T. Moreover, if the system could be adapted to patent translation, there appeared to be no reason why it could not be adapted to the translation of various types of Commission texts. In 1979 we therefore began using Systran for translating a number of documents originating in our own department. Fortunately, we were able to find one or two "motivated" translators who were happy to post-edit the machine output and advise us on priorities for further quality improvement. By the beginning of 1981 we were thus in a position to introduce a Systran service for translations from English into French and Italian and from French into English in cases where documents were considered to be suitable for M.T. The aim here was to provide the translator with a raw machine translation on paper which he could either post-edit directly or use as a basis for dictating his own translation. 
In fact, most of the documents were post-edited and returned to us, often with critical comments.The large quantity of feedback received in this way has proved enormously helpful in adapting the system to the specific needs of translators and now forms the main basis for on-going development. In addition, we have arranged a number of meetings and seminars with the translators involved, designed to explain the workings and limitations of the system to them and to hear their ideas on its further development and use.Feedback reaches us in many forms. By far the most voluminous kind comes in the form of corrections (or post-edits) made on the raw machine print-out. As the translator is expected to upgrade quality to that normally produced by conventional means, these corrections include everything from punctuation and capitalization to terminology, idiom and style. The average sentence may carry up to four or five changes, many of them minor, but in some cases whole sentences or parts of sentences are retranslated.The most immediate and direct way in which we can make use of edited printouts is by making additions or alterations to the system's dictionaries. In particular, missing terminology is immediately coded up and introduced into the system at regular intervals. Many of the other errors can also be dealt with at the dictionary level but, of course, some are of a more general nature and are dependent on the translation programs themselves. Efforts are made to add these to the system where possible but great care has to be taken in defining sound linguistic logic in order to avoid unwanted side effects. Finally, there are a number of changes relating to style, paraphrasing and restructuring which are extremely variable from one translator to another and in any case would be very difficult to incorporate in the machine process.Another extremely useful form of feedback comes in the form of notes from the translator giving his overall assessment of the quality of the output, often with details of the most important missing terminology or with lists of repetitive errors of a general nature. Even the most negative comments are often extremely useful in pinpointing areas requiring further work, if only to render the task of post-editing less irritating for seasoned translators.However, perhaps the most useful form of feedback for defining development priorities results from discussions with individual translators or groups of translators. Once they become better acquainted with the mechanics of M.T., translators are often in a position to make very sensible suggestions as to where we should concentrate our efforts. For example, it has become clear that correct terminology is more important to them than perfect syntax or style whereas in the past we may have tended to overrate syntactic accuracy in the interests of intelligibility.Among the most irritating phenomena for the translator seem to be the translation of proper nouns and expressions (such as the names of companies), non-recognition of frequently occurring idioms, and errors of elementary style (including choice of articles and prepositions). While these do not necessarily increase post-editing time, they certainly discourage many translators from making use of M.T. and are therefore being eliminated wherever possible.Finally one of the more general lessons we have been taught during the course of our experience with translators is that the more M.T. 
output a translator handles, the more proficient he becomes in making the best use of this new tool. In some cases he manages to double his output within a few months as he begins to recognize typical M.T. errors and devise more efficient ways of correcting them. Working hand in hand with translators we are also beginning to gain a better idea of the types of document best suited to the process and of those subject fields on which we should concentrate our terminology work. CONCLUSIONS M.T. systems developed for purposes of information gathering are probably not ideally suited to serve as an aid to translators. Designers of new systems should bear this in mind.If machine translation is to be used for high-quality translation work, it is vital that feedback from translators be incorporated so as to increase the real aid offered by the system.The recent enthusiasm expressed by a number of Commission translators would indicate that M.T. will, from now on, become an ever more important aid in the human translation process. | null | null | null | null | Main paper:
introduction:
At the beginning of this year (1981), translators at the Commission's translation division in Luxembourg began making use of Systran machine translations as an aid in their routine work. While many have reacted somewhat negatively to this new approach, the enthusiasm demonstrated by others seems to mark something of a turning point in the interplay between man and machine in this field.In my analysis today, I should therefore like to examine some of the possible reasons why translators have been so hesitant to turn to the computer for assistance in high-quality translation work despite the fact that M.T. systems have been in operation for a good many years. I shall also give a brief account of the types of feedback received from translators and the vital part it now plays in M.T. development at the Commission.It is not a primary aim of this conference to describe the workings or mechanics of the systems under consideration. Indeed, a considerable amount of literature has already been published on the subject. But for the purposes of this talk, let me just say that Systran as used by the Commission is a free-syntax batch-operated system covering many subject fields which produces raw machine translations without any human intervention apart from text input.If we trace back the history of M.T., we find that most of the basic design and development work in the fifties and sixties was carried out by expert computer programmers assisted by a new breed of linguists, soon to be known as computational linguists. However, seldom, if ever, were actual translators involved. This is perhaps not surprising for a number of reasons. Firstly, the hardware systems available at the time were extremely limited in capacity and performance, with the result that any program had to be carefully adapted to the constraints of the machine. Secondly, in the absence of any dependable high-level computer languages, the programming itself had to be done in machine language which was virtually incomprehensible to linguists and translators. Thirdly, translators themselves were very sceptical about M.T. in general and it was therefore up to programmers and computational linguists to prove that translation could in fact be handled by computers.There were, as we all know, many ups and downs in the early days. Large sums of money were invested but for the most part results were very disappointing. Indeed, the publication of the ALPAC report in 1966 with its recommendation that investment in M.T. development should be discontinued seemed to have finally put an end to further progress.Nevertheless a number of individuals remained convinced that machine translation was indeed feasible and one or two of them continued development work privately. In particular, by the late sixties Dr Peter Toma's original Russian-English Systran system was providing extremely useful output for the U.S. Air Force. Soon after, the Logos system started producing satisfactory translations from English into Vietnamese, and a version of the Georgetown system was used to a limited extent by the EC research centre in Ispra.But all these systems were used primarily for information gathering purposes. 
The aim was not so much to produce elegant translations for general distribution and publication as to provide experts with a rough-and-ready indication of the topics covered by documents they were unable to read in the original language.Translators who had the opportunity to examine these early results were almost invariably highly critical of the quality standards reached and some took great delight in compiling lists of particularly hilarious M.T. output. We have all been reminded time and time again of how the Russian saying "Out of sight, out of mind" allegedly produced "Invisible idiot" in English.Yet the users themselves, above all scientists and technicians working for the U.S. Air Force, reacted quite differently. They continued to put in more and more requests for raw Russian-English machine translations, covering an ever increasing number of subject fields. As a result, the Air Force extended its financing of the Systran Russian-English system, which as time went by was to serve as a prototype for developments covering other language pairs.The success of these earlier systems undoubtedly lay in the huge machine dictionaries containing hundreds of thousands of technical terms in dozens of subject fields. Yet as computer performance increased, the translation programs were significantly improved, producing an ever more intelligible standard of output.It was indeed this success which encouraged the EC Commission to acquire the Systran system in 1976 for translation from English into French and later from French into English and English into Italian. However, while initial tests seemed to indicate that M.T. could be used for information scanning purposes, as in providing raw translations of databases connected to Euronet, the Commission's primary objective, that of assisting in-house translators in their day-to-day work by providing M.T. printouts for human post-editing, proved to be a much more difficult task.For example, of those translators who were invited to participate in theinitial development of Systran, all but one left the project after the first two months, either on the grounds that they were unable to understand the technical workings of the system or, more often, because they simply did not believe the standard of output could ever be significantly improved. Then again, despite some rather positive statistics on gains in cost-efficiency documented in evaluations carried out on the Systran system, translators generally opposed the introduction of M.T., maintaining the output provided was simply of no use to them.Although these reactions were very disappointing at the time -and indeed threatened to jeopardize the entire future of the project -in retrospect they can be understood.Whereas information scientists had been content with intelligibility, the Commission's translators were far more concerned with the accuracy of a translation. Moreover, even in cases where a machine translation was accurate, in the sense that the meaning of the translation was the same as that of the original, translators had the impression that mistakes in terminology, syntax and, for example, capitalization outweighed any benefits.Some of these reactions may have been psychological. 
Seasoned translators could hardly be expected to relish the thought of having to correct errors from the computer which a 10 year-old child would never have made, nor could they be expected to give overwhelming support to a system which, to them at any rate, seemed to pose a real threat to their future.Yet on closer analysis, it appeared that many of the negative reactions received stemmed from the fact that the quality of the machine outputhowever intelligible -was simply not suitable for post-editing.A great deal of effort was therefore put into generally upgrading the quality of the systems under development by introducing more terminology, improving the performance of the analysis and synthesis programs and providing easily readable print-outs in upper and lower case. Yet in the absence of any real feedback from translators in the form of post-editing, those of us responsible for quality improvement could only guess at what the real priorities were.To help us identify these we entrusted Margaret Masterman of C.L.R.U. with a study on the future potential of Systran. One of the most important recommendations which came out of this study was that translators should be provided with full documentation on the system in natural language in order that they could play an active part in its improvement. This recommendation led to a further study by C.L.R.U. to examine the feasibility of automatically transcribing the Systran program from IBM macro into natural English.The "opening-up of the black box" resulting from this work (which is still in progress) proved to be a major step forward in encouraging translators to take an interest in the system and was paramount in overcoming one of the psychological barriers between the human translator and the machine.Another study which provided us with a much better idea of translators' requirements was that undertaken by Veronica Lawson on the applicability of Systran to the translation of patents. The strict discipline of working closely with a translator over a number of months led to a better understanding on our part of what was or what was not acceptable and indeed resulted in major improvements. The carefully annotated printouts we received from Mrs Lawson and her colleagues proved to be an excellent source of feedback, particularly as errors considered to be "serious" were highlighted by colour coding. Finally, the study showed that professional translators were indeed willing and able to make a real contribution to M.T. Moreover, if the system could be adapted to patent translation, there appeared to be no reason why it could not be adapted to the translation of various types of Commission texts.In 1979 we therefore began using Systran for translating a number of documents originating in our own department. Fortunately, we were able to find one or two "motivated" translators who were happy to post-edit the machine output and advise us on priorities for further quality improvement.By the beginning of 1981 we were thus in a position to introduce a Systran service for translations from English into French and Italian and from French into English in cases where documents were considered to be suitable for M.T. The aim here was to provide the translator with a raw machine translation on paper which he could either post-edit directly or use as a basis for dictating his own translation. 
In fact, most of the documents were post-edited and returned to us, often with critical comments.The large quantity of feedback received in this way has proved enormously helpful in adapting the system to the specific needs of translators and now forms the main basis for on-going development. In addition, we have arranged a number of meetings and seminars with the translators involved, designed to explain the workings and limitations of the system to them and to hear their ideas on its further development and use.Feedback reaches us in many forms. By far the most voluminous kind comes in the form of corrections (or post-edits) made on the raw machine print-out. As the translator is expected to upgrade quality to that normally produced by conventional means, these corrections include everything from punctuation and capitalization to terminology, idiom and style. The average sentence may carry up to four or five changes, many of them minor, but in some cases whole sentences or parts of sentences are retranslated.The most immediate and direct way in which we can make use of edited printouts is by making additions or alterations to the system's dictionaries. In particular, missing terminology is immediately coded up and introduced into the system at regular intervals. Many of the other errors can also be dealt with at the dictionary level but, of course, some are of a more general nature and are dependent on the translation programs themselves. Efforts are made to add these to the system where possible but great care has to be taken in defining sound linguistic logic in order to avoid unwanted side effects. Finally, there are a number of changes relating to style, paraphrasing and restructuring which are extremely variable from one translator to another and in any case would be very difficult to incorporate in the machine process.Another extremely useful form of feedback comes in the form of notes from the translator giving his overall assessment of the quality of the output, often with details of the most important missing terminology or with lists of repetitive errors of a general nature. Even the most negative comments are often extremely useful in pinpointing areas requiring further work, if only to render the task of post-editing less irritating for seasoned translators.However, perhaps the most useful form of feedback for defining development priorities results from discussions with individual translators or groups of translators. Once they become better acquainted with the mechanics of M.T., translators are often in a position to make very sensible suggestions as to where we should concentrate our efforts. For example, it has become clear that correct terminology is more important to them than perfect syntax or style whereas in the past we may have tended to overrate syntactic accuracy in the interests of intelligibility.Among the most irritating phenomena for the translator seem to be the translation of proper nouns and expressions (such as the names of companies), non-recognition of frequently occurring idioms, and errors of elementary style (including choice of articles and prepositions). While these do not necessarily increase post-editing time, they certainly discourage many translators from making use of M.T. and are therefore being eliminated wherever possible.Finally one of the more general lessons we have been taught during the course of our experience with translators is that the more M.T. 
output a translator handles, the more proficient he becomes in making the best use of this new tool. In some cases he manages to double his output within a few months as he begins to recognize typical M.T. errors and devise more efficient ways of correcting them. Working hand in hand with translators we are also beginning to gain a better idea of the types of document best suited to the process and of those subject fields on which we should concentrate our terminology work. CONCLUSIONS M.T. systems developed for purposes of information gathering are probably not ideally suited to serve as an aid to translators. Designers of new systems should bear this in mind.If machine translation is to be used for high-quality translation work, it is vital that feedback from translators be incorporated so as to increase the real aid offered by the system.The recent enthusiasm expressed by a number of Commission translators would indicate that M.T. will, from now on, become an ever more important aid in the human translation process.
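The feedback loop described above — mining post-edited output for terminology the system missed and feeding it back into the dictionaries — can be caricatured in a few lines of code. The Python fragment below is a hypothetical illustration only: the word-level diff, the review threshold and all names are assumptions, and it is in no way the Commission's actual Systran dictionary-update tooling, in which candidate terms are coded by hand.

```python
from collections import Counter
from difflib import SequenceMatcher

def inserted_terms(raw_mt: str, post_edited: str) -> Counter:
    """Count words the post-editor added or substituted.

    Terms that keep reappearing across many post-edited documents are
    rough candidates for addition to the system dictionaries; one-off
    stylistic changes are expected to stay below the review threshold.
    """
    a, b = raw_mt.lower().split(), post_edited.lower().split()
    added = Counter()
    for op, _i1, _i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        if op in ("insert", "replace"):
            added.update(w.strip(".,;:()\"") for w in b[j1:j2])
    return added

if __name__ == "__main__":
    batch = [
        ("the apparatus comprises a pressure regulator",
         "the apparatus comprises a pressure-reducing valve"),
        ("close the regulator before testing",
         "close the pressure-reducing valve before testing"),
    ]
    totals = Counter()
    for raw, edited in batch:
        totals += inserted_terms(raw, edited)
    # Hand frequent candidates to a terminologist for proper coding.
    for term, count in totals.most_common():
        if count >= 2:
            print(term, count)
```

Such a script would only surface candidates; as the paper stresses, the decision to add a term, and the care needed to avoid unwanted side effects, remain a human job.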
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 519 | 0.019268 | null | null | null | null | null | null | null | null |
d6f57afa9e0d219c710b3514aeec1fcf873315f6 | 10523951 | null | Building Non-Normative Systems - The Search for Robustness: An Overview | Many natural language understanding systems behave much like the proverbial high school english teacher who simply fails to understand any utterance which doesn't conform to that teacher's inviolable standard of english usage. But while the teacher merely pretends not to understand, our systems really don't. | {
"name": [
"Marcus, Mitchell P."
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 0 | 7 | null | null | null | null | Many natural language understanding systems behave much like the proverbial high school english teacher who simply fails to understand any utterance which doesn't conform to that teacher's inviolable standard of english usage. But while the teacher merely pretends not to understand, our systems really don't. understanding language which they, when asked, would consider to he non-standard in some way or other.Our programs, on the other hand, tend to be very rigid.They usually fail to degrade gracefully when their internal models of syntax, semantics or pragmatlcs are violated by user input. In essence, the models of linguistic wellformedness which these programs embody become normative; they prescribe quite rigidly what is considered standard linguistic usage and what isn't.to this problem include extending a system's linguistic coverage or intentionally excluding linguistic constraints that are occasionally violated by speakers. But neither of these approaches changes the fundamental situation -that when confronted with input which fails to conform to the system builder's expectations, however broad and however loose, the system will entirely reject the input. Furthermore, these techniques bar a system from utilizing the fact that people normally do obey certain linguistic standards, even if they violate them on occasion.More recently, a range of approaches have been investigated that allow a system to behave more robustly when confronted with input which violates its designer's expectations about standard english usage. Most of this work has been within the realm of syntax. These techniques allow grammars to he descriptive without being normative. This panel focuses on these techniques for building what might be termed non-normative systems. Panelists were asked to consider the following range of issues:Are there different kinds of non-standard usage? Candidates for a subclasslficatlon of nonstandard usage might include the telegraphic language of massages and newspaper headlines; the informal colloquial use of language, even by speakers of the standard dialect; non-standard dialects; plain out-and-out grammatical errors; and the specialized sublanguage used by experts in a given domain. To what extent do these various forms have different properties, and are there independently characterizable dimensions along which they differ? What kinds of generalizations can be expressed about each of them individually or about non-standard usage in general?What are the techiques for dealing with nonstandard input robustly? A range of techniques have been discussed in the literature which can be invoked when a system is faced with input which is outside the subset of the language that its grammar describes.These include~ (a) the use of special "un-grammatlcal" rules, which explicitly encode facts about non-standard usage; (b) the use of "meta-rules" to relax the constraints imposed by classes of rules of the grammar; (c) allowing flexible interaction between syntax and semantics, so that semantics can directly analyze substrlngs of syntactic fragments or individual words when full syntactic analysis fails. How well do these techniques, and others, work with respect to the dimensions of non-standard input discussed above? 
What are the relative strengths and weaknesses of each of these techniques?To what extent are each of these techniques useful if one's goal is not to build a system which understands input, even if non-standard; but rather to build an explicitly normative system which can either (i) pinpoint ' grammatical errors, or (2) correct errors after pinpointing them? (Ironically, a system can be normative in a useful way only if it can understand what the user meant to say.)Are there more general approaches to building systems that degrade gracefully that can be applied to this set of problems?And finally, what the near-and long-term prospects for application ~f' ~lese techniques to practical working systems? | null | Main paper:
:
Many natural language understanding systems behave much like the proverbial high school English teacher who simply fails to understand any utterance which doesn't conform to that teacher's inviolable standard of English usage. But while the teacher merely pretends not to understand, our systems really don't. People, by contrast, have little difficulty understanding language which they, when asked, would consider to be non-standard in some way or other. Our programs, on the other hand, tend to be very rigid. They usually fail to degrade gracefully when their internal models of syntax, semantics or pragmatics are violated by user input. In essence, the models of linguistic wellformedness which these programs embody become normative; they prescribe quite rigidly what is considered standard linguistic usage and what isn't. Typical responses to this problem include extending a system's linguistic coverage or intentionally excluding linguistic constraints that are occasionally violated by speakers. But neither of these approaches changes the fundamental situation - that when confronted with input which fails to conform to the system builder's expectations, however broad and however loose, the system will entirely reject the input. Furthermore, these techniques bar a system from utilizing the fact that people normally do obey certain linguistic standards, even if they violate them on occasion. More recently, a range of approaches have been investigated that allow a system to behave more robustly when confronted with input which violates its designer's expectations about standard English usage. Most of this work has been within the realm of syntax. These techniques allow grammars to be descriptive without being normative. This panel focuses on these techniques for building what might be termed non-normative systems. Panelists were asked to consider the following range of issues: Are there different kinds of non-standard usage? Candidates for a subclassification of non-standard usage might include the telegraphic language of messages and newspaper headlines; the informal colloquial use of language, even by speakers of the standard dialect; non-standard dialects; plain out-and-out grammatical errors; and the specialized sublanguage used by experts in a given domain. To what extent do these various forms have different properties, and are there independently characterizable dimensions along which they differ? What kinds of generalizations can be expressed about each of them individually or about non-standard usage in general? What are the techniques for dealing with non-standard input robustly? A range of techniques have been discussed in the literature which can be invoked when a system is faced with input which is outside the subset of the language that its grammar describes. These include: (a) the use of special "un-grammatical" rules, which explicitly encode facts about non-standard usage; (b) the use of "meta-rules" to relax the constraints imposed by classes of rules of the grammar; (c) allowing flexible interaction between syntax and semantics, so that semantics can directly analyze substrings of syntactic fragments or individual words when full syntactic analysis fails. How well do these techniques, and others, work with respect to the dimensions of non-standard input discussed above?
What are the relative strengths and weaknesses of each of these techniques? To what extent is each of these techniques useful if one's goal is not to build a system which understands input, even if non-standard, but rather to build an explicitly normative system which can either (1) pinpoint grammatical errors, or (2) correct errors after pinpointing them? (Ironically, a system can be normative in a useful way only if it can understand what the user meant to say.) Are there more general approaches to building systems that degrade gracefully that can be applied to this set of problems? And finally, what are the near- and long-term prospects for applying these techniques to practical working systems?
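As a concrete illustration of technique (b) and of the "normative" use of understanding, the following is a minimal Python sketch of constraint relaxation: a strict parse is attempted first, and if it fails the agreement constraint is retried as a preference, with every violation recorded so that the system can either accept the input robustly or report the likely error. The toy grammar, lexicon, and function names are invented for this example and are not taken from any panelist's actual system.

import random  # not used below; removed if undesired
RULES = [
    # (lhs, rhs categories, constraint name or None)
    ("S", ("NP", "VP"), "agreement"),   # subject and verb must agree in number
]
LEXICON = {
    "dogs": ("NP", "plural"),
    "dog": ("NP", "singular"),
    "barks": ("VP", "singular"),
    "bark": ("VP", "plural"),
}

def parse(tokens, relax=False):
    """Return (accepted, violations) for a two-word S -> NP VP toy grammar."""
    if len(tokens) != 2:
        return False, ["unparsed input"]
    (cat1, num1), (cat2, num2) = LEXICON[tokens[0]], LEXICON[tokens[1]]
    for lhs, (r1, r2), constraint in RULES:
        if (cat1, cat2) == (r1, r2):
            violations = []
            if constraint == "agreement" and num1 != num2:
                if not relax:
                    return False, []          # strict mode: reject outright
                violations.append("subject-verb agreement")
            return True, violations
    return False, []

def robust_parse(tokens):
    ok, violations = parse(tokens)
    if ok:
        return "accepted (standard usage)"
    ok, violations = parse(tokens, relax=True)   # second pass with relaxed constraints
    if ok:
        return "accepted; suspected errors: " + ", ".join(violations)
    return "rejected"

print(robust_parse(["dogs", "bark"]))    # accepted (standard usage)
print(robust_parse(["dogs", "barks"]))   # accepted; suspected errors: subject-verb agreement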
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 512 | 0.013672 | null | null | null | null | null | null | null | null |
80ae48ca2c395c3cde6d04e678a2adafc49a4ac9 | 2040861 | null | Themes From 1972 | Although 1972 was the year that Winograd published his now classic natural language Study of the blocks world, that fact had not yet penetrated to the ACL. At that time people with AI computational interests were strictly in a minority in the association and it was a radical move to appoint Roger Schank as program chairman for the year's meeting. That was also the year that we didn't have a presidential banquet, and my "speech" was a few informal remarks at the roadhouse restaurant somewhere in North Carolina reassuring a doubtful few members that computational understanding of natural language was certainly progressing and that applied natural language systems were distinctly feasible. | {
"name": [
"Simmons, Robert F."
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 3 | 3 | null | own perceptions of the state of computational linguistics during that period were given in "On Seeing the Elephant" in the Finite String, March-Aprll 1972. I saw it as a time of confusion, of competition among structuralists, transformationallsts, and the new breed of computernlks."On Seeing the Elephant" was a restatement of the old Sufi parable that suggested that we each perceived only isolated parts of our science.That was the period during which Jonathan Slocum and I were concerned with using Augmented Transition Networks to generate coherent English from semantic networks.That llne of research was originated by the first President of the Association, Victor Yngve, who in 1960 had published descriptions of algorithms for using a phrase structure grammar to generate syntactically well-formed nonsense sentences [Yngve 1960 ]. Sheldon Klein and I about 1962-1964 were fascinated by the technique and generalized it to a method for controlling the sense of what was generated by respecting the semantic dependencies of words as they occurred in text. Yngve's work was truly seminal and it continued to inspire Sheldon for years as he developed method after method for generating detective stories and now operas.I, too, with various students continued to explore the generation side of language, most recently with Correlra [1979] , using a form of story tree to construct stories and their summaries.No matter that Meehan found better methods and Bill Mann and his colleagues continue to improve on the techniques.The use of a phrase structure grammar to control the sequence in which sentences and words are p~oduced remains quite as fascinating as its use in translatln~ sentences to representations of meaning.of text in Just a few paragraphs, so in dedication to Yngve, Klein, and i00 the many others of the discipline who share our fascination with generation of meaningful language, the following description is presented.The last two lines of Keats" "Ode to a Grecian Urn" are:Beauty is truth, truth beauty, that is all Ye know on earth and all ye need to know. of The line rules are composed of terms such as "beauty", "that is all", etc., that begin SCLASS predications, and of terminals such as "is" and "--" that do not.Poem and verse can also be defined as rules:[POEM title verse verse ... verse] [TITLE (Variation on Keats" Truth is Beauty)] [VERSE klinel kllne2 kllne3]Actually it is more convenient to define these latter three elements as program to control choice of grammar, spacing, and number of verses. In either case, a POEM is a TITLE followed by VERSEs, an~ ~ VERSE is three lines each composed of terminals that occur in a KLINE or of selections from the matching substitution class.Only one other program element is required: a random selection function to pseudo-randomly choose an element from a substitution class and to record that element as chosen:((CHOOSE ( FIRST. REMDR) CHOICE) < (CHOSEN FIRS~ CHOICE)) ((CHOOSE ( FYRST. REMDR) CHOICE) < (RANDOM* ( FIRST. REMI~R) CHOICE) (ASSERT (CHOSEN jHOICE))~ Note: CHOOSE is called with the content of an SCLASS rule in the list (FIRST.REMDR); if a choice for the term has previously been made in the verse, CHOICE is taken from the predicate, (CHOSEN FIRST CHOICE).If not, RANDOM* selects an element and records it as CHOSEN. When a verse is begun, any existing CHOSEN predicates are deleted. 
This is a procedural logic program with lists in dot notation and variables marked using the underscore.It is presented to give a sense of how the program appears in Dan Chester's LISP version of PROLOG.The rest of the program follows the poem, verse, and Keats-LINE rules given above.The program is called by (POGEN KEATS 4) , KEATS selecting the grammar and 4 signifying the number of verses. A couple of recordings of its behavior appear below. 1972 the computational linguistics world has changed much.Today AI and Logic interests tend to overshadow linguistic approaches to language. But despite all the complexities in translating between NL constituents and computational representations, augmented phrase structure grammars provide a general and effective means to guide the flow of computation. | null | null | null | null | Main paper:
my:
own perceptions of the state of computational linguistics during that period were given in "On Seeing the Elephant" in the Finite String, March-April 1972. I saw it as a time of confusion, of competition among structuralists, transformationalists, and the new breed of computerniks. "On Seeing the Elephant" was a restatement of the old Sufi parable that suggested that we each perceived only isolated parts of our science. That was the period during which Jonathan Slocum and I were concerned with using Augmented Transition Networks to generate coherent English from semantic networks. That line of research was originated by the first President of the Association, Victor Yngve, who in 1960 had published descriptions of algorithms for using a phrase structure grammar to generate syntactically well-formed nonsense sentences [Yngve 1960]. Sheldon Klein and I about 1962-1964 were fascinated by the technique and generalized it to a method for controlling the sense of what was generated by respecting the semantic dependencies of words as they occurred in text. Yngve's work was truly seminal and it continued to inspire Sheldon for years as he developed method after method for generating detective stories and now operas. I, too, with various students continued to explore the generation side of language, most recently with Correira [1979], using a form of story tree to construct stories and their summaries. No matter that Meehan found better methods and Bill Mann and his colleagues continue to improve on the techniques. The use of a phrase structure grammar to control the sequence in which sentences and words are produced remains quite as fascinating as its use in translating sentences to representations of meaning. The generation of text cannot be described in just a few paragraphs, so in dedication to Yngve, Klein, and the many others of the discipline who share our fascination with generation of meaningful language, the following description is presented. The last two lines of Keats' "Ode to a Grecian Urn" are: Beauty is truth, truth beauty, that is all / Ye know on earth and all ye need to know. The line rules are composed of terms such as "beauty", "that is all", etc., that begin SCLASS predications, and of terminals such as "is" and "--" that do not. Poem and verse can also be defined as rules: [POEM title verse verse ... verse] [TITLE (Variation on Keats' Truth is Beauty)] [VERSE kline1 kline2 kline3] Actually it is more convenient to define these latter three elements as program to control choice of grammar, spacing, and number of verses. In either case, a POEM is a TITLE followed by VERSEs, and a VERSE is three lines each composed of terminals that occur in a KLINE or of selections from the matching substitution class. Only one other program element is required: a random selection function to pseudo-randomly choose an element from a substitution class and to record that element as chosen: ((CHOOSE (_FIRST . _REMDR) _CHOICE) < (CHOSEN _FIRST _CHOICE)) ((CHOOSE (_FIRST . _REMDR) _CHOICE) < (RANDOM* (_FIRST . _REMDR) _CHOICE) (ASSERT (CHOSEN _CHOICE))) Note: CHOOSE is called with the content of an SCLASS rule in the list (_FIRST . _REMDR); if a choice for the term has previously been made in the verse, _CHOICE is taken from the predicate (CHOSEN _FIRST _CHOICE). If not, RANDOM* selects an element and records it as CHOSEN. When a verse is begun, any existing CHOSEN predicates are deleted.
This is a procedural logic program with lists in dot notation and variables marked using the underscore. It is presented to give a sense of how the program appears in Dan Chester's LISP version of PROLOG. The rest of the program follows the poem, verse, and Keats-LINE rules given above. The program is called by (POGEN KEATS 4), KEATS selecting the grammar and 4 signifying the number of verses. A couple of recordings of its behavior appear below. Since 1972 the computational linguistics world has changed much. Today AI and Logic interests tend to overshadow linguistic approaches to language. But despite all the complexities in translating between NL constituents and computational representations, augmented phrase structure grammars provide a general and effective means to guide the flow of computation.
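To make the control structure concrete, here is a minimal Python sketch of the same idea: a phrase-structure grammar drives generation, substitution classes are sampled pseudo-randomly, and a choice made for a term is remembered for the rest of the verse (the analogue of the CHOSEN predicates, cleared when a new verse begins). The grammar content below is illustrative only; it is not the actual KEATS grammar or the POGEN program described above.

import random

# Substitution classes (SCLASS): each term may be rewritten by any member of its class.
SCLASS = {
    "beauty": ["beauty", "truth", "love", "knowledge"],
    "that is all": ["that is all", "that is enough", "that is everything"],
}

# Line rules (KLINE): a mix of class terms and literal terminals.
KLINES = [
    ["beauty", "is", "beauty", ",", "that is all"],
    ["ye know on earth", "--", "that is all"],
    ["all ye need to know", "is", "beauty"],
]

def choose(term, chosen):
    """Return the value already CHOSEN for this term in the current verse,
    or pseudo-randomly pick one and record it (RANDOM* plus ASSERT)."""
    if term not in SCLASS:          # terminal symbol: emit it unchanged
        return term
    if term not in chosen:          # first use in this verse
        chosen[term] = random.choice(SCLASS[term])
    return chosen[term]

def verse():
    chosen = {}                     # CHOSEN predicates are cleared at verse start
    return "\n".join(" ".join(choose(t, chosen) for t in kline) for kline in KLINES)

def poem(n_verses=4, title="Variation on Keats' Truth is Beauty"):
    return title + "\n\n" + "\n\n".join(verse() for _ in range(n_verses))

print(poem(4))   # rough analogue of calling (POGEN KEATS 4)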
Appendix:
| null | null | null | null | {
"paperhash": [
"simmons|relating_sentences_and_semantic_networks_with_procedural_logic",
"yngve|a_model_and_an_hypothesis_for_language_structure"
],
"title": [
"Relating sentences and semantic networks with procedural logic",
"A model and an hypothesis for language structure"
],
"abstract": [
"A system of symmetric clausal logic axioms is shown to transform a thirteen-sentence narrative about a v-2 rocket flight into semantic case relations. The same axioms translate the case relations into english sentences. An approach to defining schemas in clausal logic is presented and applied in the form of a mini-flight schema to two paragraphs of the text to compute a partitioning of the semantic network into the causal organization of a flight. Properties of rule symmetry and network condensibility are noted to be of importance for natural language processing. Because of the conciseness of the logic interpreter and the clausal representation for grammars and schemes, it is concluded that the procedural logic approach provides an effective programming system that is promising for accomplishing natural language computations on mini- and microcomputers as well as on large mainframes. 29 references.",
"Cover title. \"Reprint from Proceedings of the American Philosophical Society, vol.104, no.5.\""
],
"authors": [
{
"name": [
"Robert F. Simmons",
"Daniel L. Chester"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"V. Yngve"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null
],
"s2_corpus_id": [
"10437874",
"18889404"
],
"intents": [
[],
[]
],
"isInfluential": [
false,
false
]
} | null | 512 | 0.005859 | null | null | null | null | null | null | null | null |
120e6ca59edc5d15f5dbbcb9adef71c73ba1fae0 | 14372141 | null | Processing {E}nglish With a Generalized Phrase Structure Grammar | This paper describes a natural language processing system implemented at Hewlett-Packard's Computer Research Center. The system's main components are: a Generalized Phrase Structure Grammar (GPSG); a top-down parser; a logic transducer that outputs a first-order logical representation; and a "disambiguator" that uses sortal information to convert "normal-form" first-order logical expressions into the query language for HIRE, a relational database hosted in the SPHERE system. We argue that theoretical developments in GPSG syntax and in Montague semantics have specific advantages to bring to this domain of computational linguistics. The syntax and semantics of the system are totally domain-independent, and thus, in principle, highly portable. We discuss the prospects for extending domain-independence to the lexical semantics as well, and thus to the logical semantic representations. I. | {
"name": [
"Gawron, Jean Mark and",
"King, Jonathan and",
"Lamping, John and",
"Loebner, Egon and",
"Paulson, E. Anne and",
"Pullum, Geoffrey K. and",
"Sag, Ivan A. and",
"Wasow, Thomas"
],
"affiliation": [
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 22 | 31 | null | This paper is an interim progress report on linguistic research carried out at Hewlett-Packard Laboratories since the summer of 1981.The research had three goals: (1) demonstrating the computational tractability of Generalized Phrase Structure Grammar (GPSG), (2) implementing a GPSG system covering a large fragment of English, and (3) establishing the feasibility of using GPSG for interactions with an inferencing knowledge base.Section 2 describes the general architecture of the system. Section 3 discusses the grammar and the lexicon.A brief dicussion of the parsing technique used in found in Section 4.Section 5 discusses the semantics of the system, and Section 6 presents ~ detailed example of a parse-tree complete with semantics.Some typical examples that the system can handle are given in the Appendix.The system is based on recent developments in syntax and semantics, reflecting a modular view in which grammatical structure an~ abstract logical structure have independent status. The understanding of a sentence occurs in a number of stages, distinct from each other and governed by different principles of organization. We are opposed to the idea that language understanding can be achieved without detailed syntactic analysis.There is, of course, a massive pragmatic component to human linguistic interaction.But we hold that pragmatic inference makes use of a logically prior grammatical and semantic analysis. This can be fruitfully modeled and exploited even in the complete absence of any modeling of pragmatic inferencing capability. However, this does not entail an incompatibility between our work and research on modeling discourse organization and conversational interaction directly=Ultimately, a successful language understanding system wilt require both kinds of research, combining the advantages of precise, grammar-driven analysis of utterance structure and pragmatic inferencing based on discourse structures and knowledge of the world. We stress, however, that our concerns at this stage do not extend beyond the specification of a system that can efficiently extract literal meaning from isolated sentences of arbitrarily complex grammatical structure.Future systems will exploit the literal meaning thus extracted in more ambitious applications that involve pragmatic reasoning and discourse manipulation.The system embodies two features that simultaneously promote extensibility, facilitate modification, and increase efficiency. The first is that its grammar is context-free in the informal sense sometimes (rather misleadingly) used in discussions of the autonomy of grammar and pragmatics:the syntactic rules and the semantic translation rules are independent of the specific application domain. Our rules are not devised ad hoc with a particular application or type of interaction in mind.Instead, they are motivated by recent theoretical developments in natural language syntax, and evaluated by the usual linguistic canons of simplicity and generality. No changes in the knowledge base or other exigencies deriving from a particular context of application can introduce a problem for the grammar (as distinct, of course, from the lexicon).The second relevant feature is that the grammar ir the-system is context-free in the sense of formal language theory. 
This makes the extensive mathematical literature on context-free phrase structure grammars (CF-PSG's) directly relevant to the enterprise, and permits utilization of all the well-known techniques for the computational implementation of context-free grammars.It might seem anachronistic to base a language understanding system on context-free parsing.As Pratt (1975, 423) observes: "It is fashionable these days to want to avoid all reference to context-free grammars beyond warning students that they are unfit for computer consumption as far as computational linguistics is concerned." Moreover, widely accepted arguments have been given in the linguistics literature to the effect that some human languages are not even weakly context-free and thus cannot possibly be described by a CF-PSG.However, Gazdar and Pullum (1982) answer all of these arguments, showing that they are either formally invalid or empirically unsupported or both. It seems appropriate, therefore, to take a renewed interest in the possibility of CF-PSG description of human languages, both in computational linguistics and in linguistic research generally. | The semantics handler uses the translation rule associated with a node to construct its semantics from the semantics of its daughters. This construction makes crucial use of a procedure that we call Cooper storage (after Robin Cooper; see below).In the spirit of current research in formal semantics, each syntactic constituent is associated directly with a single logic expression (modulo Cooper Storage), rather than any program or procedure for producing such an expression. Our semantic analysis thus embraces the principle of "surface compositionality." The semantic representations derived at each node are referred to as the Logical Representation (LR).The disambiguator provides the crucial transition from LR to HIRoE queries; the disambiguator uses information about the sort, or domoin of definition, of various terms in the logical representation.One of the most important functions of the disambiguator is to eliminate parses that do not make sense in the conceptual scheme of HIRE.HIRE is a relational database with a certain amount of inferencin9 capability.It is implemented in SPHERE, a database system which is a descendant of FOL (described in Weyhrauch (1980)).Many of the relation-names output by the disambiguator are derived relations defined by axioms in SPHERE.The SPHERE environment was important for this application, since it was essential to have something that could process first-order logical output, and SPHERE does just that.A noticeable recent trend in database theory has been a move toward an interdisciplinary comingling of mathematical logic and relational database technology (see especially Gallaire and Minker (1978) and Gallaire, Minker and Nicolas (198])).We regard it as an important fact about the GPSG system that links computational linguistics to first-order logical representation just as the work referred to above has linked first-order logic to relational database theory. 
We believe that SPHERE offers promising prospects for a knowledge representation system that is principled and general in the way that we have tried to exemplify in our syntactic and semantic rule system.Filman, Lamping and Montalvo (]982) present details of some capabilities of SPHERE that we have not as yet exploited in our work, involving the use of multiple contexts to represent viewpoints, beliefs, and modalities, which are generally regarded as insuperable stumbling-blocks to first-order logic approaches.Thus far the linguistic work we have described has been in keeping with GPSG presented in the papers cited above. However two semantic innovations have been introduced to facilitate the disambiguator's translation from LR to a HIRE query.As a result the linguistic system version of LR has two new properties:(1) The intensional logic of the published work was set aside and LR was designed to be an extensional first-order language. Although constituent translations built up on the way to a root node may be second-order, the systemmaintains first-order reducibility. This reducibility is illustrated by the following analysis of noun phrases as second-order properties (essentially the analysis of Montague (]970)). For example, the proper name Egon and the quantified noun phrase every opplicant are both translated as sets of properties: Egon = LAMBDA P (P EGON) Every applicant = LAMBDA P (FORALL X ((APPLICANT X) --> (P X)))Egon is translated as the set of properties true of Egon, and every applicant, as the set of properties true of all applicants.Since basic predicates in the logic are first-order, neither of the above expressions can be made the direct • argument of any basic predicate;instead the argument is some unique entity-level variable which is later bound to the quantifier-expression by quantifying in.This technique is essentially the storage device proposed in Cooper (1975) . One advantage of this method of "deferring" the introduction into the interpretation process of phrases with quantifier meanings is that it allows for a natural, nonsyntactic treatment of scope ambiguities. Another is that with a logic limited to first-order predicates, there is still a natural treatment for coordinated noun phrases of apparently heterogeneous semantics, such as Egon and every applicant.(2) HIRE represents events as objects. I n order to accomodate this many-to-many mapping between a verb and particular relations in a knowledge base, the lexicon stipulates special relations that link a verb to its eventual arguments.Following Fillmore (1968), these mediating relations are called case roles.The disambiguator narrows the case roles down to specific knowledge base relations.To take a simple example, Anne works for HP has a logical representation reducible to:(EXISTS SIGMA (AND (EMPLOYMENT SIGMA) (AG SIGMA ANNE) (LOC SIGMA HP)))Here SIGMA is a variable over situations or event instantiations, s The formula may be read, "There is an employment-situation whose Agent is Anne and whose Location is HP." The lexical entry for work supplies the information that its subject is an Agent and its complement a Location. The disambiguator now needs to further specify the case roles as HIRE relations.It does this by treating each atomic formula in the expression locally, using the fact that Anne is a person in order to interpret AG, and the fact that HP is an organization in order to interpret LOC. 
In this case, it interprets the AG role as employment.employee and the LOC role as employment.organization.The advantages of using the roles in Logical Representation, rather than going directly to predicates in a knowledge base, include (1) the ability to interpret at least some prepositional phrases, those known as adjuncts, without subcategorizing verbs specially for them, since the case role may be supplied either by a verb or a preposition.(2) the option of interpreting 'vague' verbs such as have and give using case roles without event types. These verbs, then, become "purely" relational. representations, and to make all knowledge base-specific predicates and relations the exclusive province of the disambiguator.One important means to that end is case roles, which allow us a level of abstract, purely "linguistic" relations to mediate between logical representations and HIRE queries.Another is the use of general event types such as labor, to replace event-types specific to HIRE, such as employments.The case roles maintain a separation between the domain representation language and LR. Insofar as that separation is achieved, then absolute portability of the system, up to and including the lexicon, is an attainable goal.Absolute portability obviously has immediate practical benefits for any system that expects to handle a large fragment of English, since the effort in moving from one application to another will be limited to "tuning" the disambiguator to a new ontology, and adding "specialized" vocabulary. The actual rules governing the production of first-order logical representations make no reference to the facts of HIRE.The question remains of just how portable the current lexicon is; the answer is that much of it is already domain independent. Quantifiers like every (as we saw in the discussion of NP semantics) are expressed as logical constants;verbs like give are expressed entirely in terms of the case relations that hold among their arguments.Verbs like work can be abstracted away from the domain by a simple extension.The obvious goal is to try to give domain independent representations to a core vocabulary of English that could be used in a variety of application domains. | We shall now give a slightly more detailed illustration of how the syntax and compositional semantics rules work.We are still simplifying considerably, since we have selected an example where rote frames are not involved, and we are not employing features on nodes.Here we have the grammar of a trivial subset of English: The syntax of a lexical entry is <L: C: T>, where L is the spelling of the item, C is its grammatical category and feature specification (if other than the default set) and T is its translation into LR.Consider how we assign an LR to a sentence like Every applicant is competent. The translation of every supplies most of the structure of the universal quantification needed in LR. 
It represents a function from properties to functions from properties to truth values, so when applied to applicant it yields a constituent, namely every applicant, which has one of the property slots filled, and represents a function from properties to truth-values; it is:(LAMBDA P (FORALL X ((APPLICANT X) IMPLIES (P X))))This function can now be applied to the function denoted by competent, i.e.This yields:(FORALL X ((APPLICANT X) IMPLIES (LAMBDA Y (EXPERT.LEVEL HIGH Y)) X))And after one more lambda-conversion, we have: 1 shows one parse tree that would be generated by the above rules, together with its logical translation. The sentence is Bill interviewed every applicant.( FORALL X ((APPLICANT X) IMPLIES (EXPERT.LEVEL HIGH X))) Fig.The complicated translation of the VP is necessary because INTERVIEW is a one-place predicate that takes an entity-type argument, not the type of function that every applicant denotes.We thus defer combining the NP translation with the verb by using Cooper storage. A translation with a stored NP is represented above in angle-brackets. Notice that at the S node the NP every applicant is still stored, but the subject is not stored.It has directly combined with the VP, by taking the VP as an argument.INTERVIEW is itself a two-place predicate, but one of its argument places has been filled by a place-holding variable, X1.There is th~Js ~ only one slot left.The translation can now be completed via the operations of Storage Retrieval and lambda conversion. First, we simplify the part of the semantics that isn't in storage: The function (LAMBDA P (P BILL)) has been evaluated with P set to the value (INTERVIEW X1); this is a. conventional lambda-conversion. The rule for storage retrieval is to make a one-place predicate of the sentence translation by lambda-binding the placeholding variable, and then to apply the NP translation as a function to the result. The S-node translation above becomes: This is the desired final result. | null | The linguistic basis of the GPSG linguistic system resides in the work reported in Gazdar (1981, 1982) and Gazdar, Pullum, and Sag (1981) . 1 These papers argue on empirical and theoretical grounds that context-freeness is a desirable constraint on grammars.It clearly would not be so desirable, however, if (1) it led to lost generalizations or (2) it resulted in an unmanageable number of rules in the grammar. Gazdar (1982) proposes a way of simultaneously avoiding these two problems. Linguistic generalizations can be captured in a context-free grammar with a metagrammor, i.e. a higher-level grammar that generates the actual grammar as its language.The metagrammar has two kinds of statements:(1) Rule schemata. These are basically like ordinary rules, except that they contain variables ranging over categories and features.(2) Metarules.These are implicational statements, written in the form ===>B, which capture relations between rules.A metarule ===>t~ is interpreted as saying, "for every rule that is an instantiation of the schema =, there is a corresponding rule of form [5. " Here 13 will be @(~), where 8 issome mapping specified partly by the general theory of grammar and partly in the metarule formulation.For instance, it is taken to be part of the theory of grammar that @ preserves unchanged the subcategorization (rule name) features of rules (cf. below). 
In our terms the latter is information linking lexical items of a particular category to specific environments in which that category is introduced by phrase structure rules.Presence in the lexical entry for an item I of the feature R (where R is the name of a rule) indicates that / may appear in structures admitted by R, and absence indicates that it may not.The semantic information in a lexical entry is sometimes simple, directly linking a lexical item with some HIRE predicate or relation.With verbs or prepositions, there is also a specification of what case roles to associate with particular arguments (cf. below for discussion of case roles). Expressions that make a complex logical contribution to the sentence in which they appear witl in general have complicated translations. Thus every has the translation-2. There is a theoretical issue here about whether semantic translation rules need to be stipulated for each syntactic rule or whether there is a general way of predicting their form. See Klein and Sag (t981) for an attempt to develop the latter view, which is not at present implemented in our system.(LAMBDA P (LAMBDA Q ((FORALL X (P X)) --> (Q x)))),This indicates that it denotes a function which takes as argument a set P, and returns the set of properties that are true of all members of that set (cf. below for slightly more detailed discussion).A typical rule looks like this:<VPI09: V] -> V N]! N!I2: ((V N!!2) N!!)>The exclamation marks here are our notation for the bars in an X-bar category system.(See Jackendoff (1977) for a theory of this type--though one which differs on points of detail from ours.)The rule has the form <a: b: c>. Here a is the name 'VP109'; b is a condition that will admit a node labeled 'V!' if it has three daughter nodes labeled respectively 'V' (verb), 'Nit' (noun phrase at the second bar level), and 'NI!' (the numeral 2 being merely an index to permit reference to a specific symbol in the semantics, the metarules, and the rule compiler, and is not a part of the category label);and c is a semantic translation rule stating that the V constituent translates as a function expression taking as its argument the translation of the second N!!, the result being a function expression to be applied to the translation of the first N!!.the rule name is one of the feature values marked on the lexical head of any rule that introduces a lexical category (as this one introduces V).Only verbs marked with that feature value satisfy this rule. For example, if we include in the lexicon the word give and assign to it the feature VPI09, then this rule would generate the verb phrase gave Anne a job.A typical metarule is the passive metarule, which looks like this (ignoring semantics):<PAS: <V! -> V NI! W > => <V! -> V[PAS] W>>W is a string variable ranging over zero or more category symbols. The metarule has the form <N: <A> => <B>>, where N is a name and <A> and <B > are schemata that have rules as their instantiations when appropriate substitutions are made for the free variables.This metarule says that for every rule that expands a verb phrase as verb followed by noun phrase followed by anything else (including nothing else), there is another rule that expands verb phrase as verb with passive morphology followed by whatever followed the noun phrase in the given rule. The metarule PAS would apply to grammar rule VP109 given above, yielding the rule:<VP109: V! 
-> V[PAS] N{!>As we noted above, the rule number feature is preserved here, so we get Anne was given a job, where the passive verb phrase is given a job, but not *Anne was hired a job. 3Passive sentences are thus analyzed directly, and not reduced to the form of active sentences in the course of being analyzed, in the way that is familiar from work on transformational grammars and on ATN's.However, this does not mean that no relation between passives and their active counterparts is expressed in the system, because the rules for analyzing passives are in a sense derivatively defined on the basis of' rules for analyzing actives.More difficult than treating passives and the like, and often cited as literally impossible within a context-free grammar'," is treating constructions like questions and relative clauses.The apparent difficulty resides in the fact that in a question like The problem is thus one of guaranteeing a grammatical dependency across a context that may be arbitrarily wide, while keeping the grammar context-free. The technique used is introduced into the linguistic literature by Gazdar (1981) .It involves an augmentation of the nonterminal vocabulary of the grammar that permits constituents with "gaps" to be treated as not belonging to the same category as similar constituents without gaps.This would be an unwelcome and inelegant enlargement of the grammar if it had to be done by means of case-by-case stipulation, but again the use of a metagrammar avoids this.Gazdar (1981) and therefore defines rules that allow for actual gaps--i.e., missing constituents. In this way, complete sets of rules for describing the unbounded dependencies found in interrogative and relative clauses can readily be written.Even long-distance agreement facts can be (and are) captured, since the morphosyntactic features relevant to a specific case of agreement are present in the feature composition of any given ~'.The system is initialized by expanding out the grammar. That is, tile metarules are applied to the rules to produce the full rule set, which is then compiled and used by the parser.Metarules are not consulted during the process of parsing. One might well wonder about the possible benefits of the other alternative: a parser that made the metarule-derived rules to order each time they were needed, instead of consulting a precompiled list.This possibility has been explored by Kay (1982).Kay draws an analogy between metarules and phonological rules, modeling both by means of finite state transducers.We believe that this line is worth pursuing;however, the GPSG system currently operates off a precompiled set of rules.Application of ten metarules to forty basic rules yielded 283 grammar rules in the 1/1/82 version of the GPSG system.Since then the grammar has been expanded somewhat, though the current version is still undergoing some debugging, and the number of rules is unstable. 
The size of the grammar-plus-metarules system grows by a factor of five or six through the rule compilation.The great practical advantage of using a metarule-induced grammar is, therefore, that the work of designing and revising the system of linguistic rules can proceed on a body of statements that is under twenty percent of the size it would be if it were formulated as a simple list of context-free rules.The system uses a standard type of top-down parser with no Iookahead, augmented slightly to prevent it from looking for a given constituent starting in a given spot more than once.It produces, in parallel, all legal parse trees for a sentence, with semantic translations associated with each node.What we have outlined is a natural language system that is a direct implementation of a linguistic theory.We have argued that in this case the linguistic theory has the special appeal of computational tractability (promoted by its context-freeness), and that the system as a whole offers the hope of a happy marriage of linguistic theory, mathematical logic, and advanced computer applications.The system's theoretical underpinnings give it compatibility with current research in Generalized Phrase Structure Grammar, and its augmented first order logic gives it compatibility with a whole body of ongoing research in the field of model-theoretic semantics.The work done thus far is only the first step on the road to a robust and practical natural language processor, but the guiding principle throughout has been extensibility, both of the grammar, and of the applicability to various spheres of computation.Grateful acknowledgement is given to two brave souls, Steve Gadol and Bob Kanefsky, who helped give this system some of its credibility by implementing the actual hook-up with HIRE. Thanks are also due Robert Filman and Bert Raphael for helpful comments on an early version of this paper.And a special thanks is due Richard Weyhrauch, for encouragement, wise advice, and comfort in times of debugging. | Main paper:
components of the system:
The linguistic basis of the GPSG linguistic system resides in the work reported in Gazdar (1981, 1982) and Gazdar, Pullum, and Sag (1981). These papers argue on empirical and theoretical grounds that context-freeness is a desirable constraint on grammars. It clearly would not be so desirable, however, if (1) it led to lost generalizations or (2) it resulted in an unmanageable number of rules in the grammar. Gazdar (1982) proposes a way of simultaneously avoiding these two problems. Linguistic generalizations can be captured in a context-free grammar with a metagrammar, i.e. a higher-level grammar that generates the actual grammar as its language. The metagrammar has two kinds of statements: (1) Rule schemata. These are basically like ordinary rules, except that they contain variables ranging over categories and features. (2) Metarules. These are implicational statements, written in the form α ==> β, which capture relations between rules. A metarule α ==> β is interpreted as saying, "for every rule that is an instantiation of the schema α, there is a corresponding rule of form β." Here β will be Φ(α), where Φ is some mapping specified partly by the general theory of grammar and partly in the metarule formulation. For instance, it is taken to be part of the theory of grammar that Φ preserves unchanged the subcategorization (rule name) features of rules (cf. below). In our terms the latter is information linking lexical items of a particular category to specific environments in which that category is introduced by phrase structure rules. Presence in the lexical entry for an item I of the feature R (where R is the name of a rule) indicates that I may appear in structures admitted by R, and absence indicates that it may not. The semantic information in a lexical entry is sometimes simple, directly linking a lexical item with some HIRE predicate or relation. With verbs or prepositions, there is also a specification of what case roles to associate with particular arguments (cf. below for discussion of case roles). Expressions that make a complex logical contribution to the sentence in which they appear will in general have complicated translations. Thus every has the translation (LAMBDA P (LAMBDA Q (FORALL X ((P X) --> (Q X))))). (A footnote notes a theoretical issue here about whether semantic translation rules need to be stipulated for each syntactic rule or whether there is a general way of predicting their form; see Klein and Sag (1981) for an attempt to develop the latter view, which is not at present implemented in our system.) This indicates that it denotes a function which takes as argument a set P, and returns the set of properties that are true of all members of that set (cf. below for slightly more detailed discussion). A typical rule looks like this: <VP109: V! -> V N!! N!!2: ((V N!!2) N!!)> The exclamation marks here are our notation for the bars in an X-bar category system. (See Jackendoff (1977) for a theory of this type--though one which differs on points of detail from ours.) The rule has the form <a: b: c>. Here a is the name 'VP109'; b is a condition that will admit a node labeled 'V!' if it has three daughter nodes labeled respectively 'V' (verb), 'N!!' (noun phrase at the second bar level), and 'N!!2'
(the numeral 2 being merely an index to permit reference to a specific symbol in the semantics, the metarules, and the rule compiler, and is not a part of the category label);and c is a semantic translation rule stating that the V constituent translates as a function expression taking as its argument the translation of the second N!!, the result being a function expression to be applied to the translation of the first N!!.the rule name is one of the feature values marked on the lexical head of any rule that introduces a lexical category (as this one introduces V).Only verbs marked with that feature value satisfy this rule. For example, if we include in the lexicon the word give and assign to it the feature VPI09, then this rule would generate the verb phrase gave Anne a job.A typical metarule is the passive metarule, which looks like this (ignoring semantics):<PAS: <V! -> V NI! W > => <V! -> V[PAS] W>>W is a string variable ranging over zero or more category symbols. The metarule has the form <N: <A> => <B>>, where N is a name and <A> and <B > are schemata that have rules as their instantiations when appropriate substitutions are made for the free variables.This metarule says that for every rule that expands a verb phrase as verb followed by noun phrase followed by anything else (including nothing else), there is another rule that expands verb phrase as verb with passive morphology followed by whatever followed the noun phrase in the given rule. The metarule PAS would apply to grammar rule VP109 given above, yielding the rule:<VP109: V! -> V[PAS] N{!>As we noted above, the rule number feature is preserved here, so we get Anne was given a job, where the passive verb phrase is given a job, but not *Anne was hired a job. 3Passive sentences are thus analyzed directly, and not reduced to the form of active sentences in the course of being analyzed, in the way that is familiar from work on transformational grammars and on ATN's.However, this does not mean that no relation between passives and their active counterparts is expressed in the system, because the rules for analyzing passives are in a sense derivatively defined on the basis of' rules for analyzing actives.More difficult than treating passives and the like, and often cited as literally impossible within a context-free grammar'," is treating constructions like questions and relative clauses.The apparent difficulty resides in the fact that in a question like The problem is thus one of guaranteeing a grammatical dependency across a context that may be arbitrarily wide, while keeping the grammar context-free. The technique used is introduced into the linguistic literature by Gazdar (1981) .It involves an augmentation of the nonterminal vocabulary of the grammar that permits constituents with "gaps" to be treated as not belonging to the same category as similar constituents without gaps.This would be an unwelcome and inelegant enlargement of the grammar if it had to be done by means of case-by-case stipulation, but again the use of a metagrammar avoids this.Gazdar (1981) and therefore defines rules that allow for actual gaps--i.e., missing constituents. In this way, complete sets of rules for describing the unbounded dependencies found in interrogative and relative clauses can readily be written.Even long-distance agreement facts can be (and are) captured, since the morphosyntactic features relevant to a specific case of agreement are present in the feature composition of any given ~'.
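The rule-plus-metarule mechanism can be made concrete with a small sketch. The Python below is an illustrative reconstruction, not the HP system's actual rule compiler; the rule and metarule names (VP109, PAS) are taken from the discussion above, but the data structures and helper functions are invented for the example. The point it demonstrates is that the passive metarule derives new rules from active ones while preserving the rule-name feature, so only verbs lexically marked for VP109 satisfy the derived rule.

from typing import List, Tuple, Optional

Rule = Tuple[str, str, List[str]]   # (rule name, left-hand side, right-hand side)

base_rules: List[Rule] = [
    ("VP109", "V!", ["V", "N!!", "N!!2"]),   # V! -> V N!! N!!  (e.g. "gave Anne a job")
    ("VP64",  "V!", ["V", "N!!"]),           # a hypothetical simple transitive rule
]

def passive_metarule(rule: Rule) -> Optional[Rule]:
    """PAS: <V! -> V N!! W>  =>  <V! -> V[PAS] W>, keeping the rule-name feature."""
    name, lhs, rhs = rule
    if lhs == "V!" and len(rhs) >= 2 and rhs[0] == "V" and rhs[1].startswith("N!!"):
        return (name, lhs, ["V[PAS]"] + rhs[2:])   # drop the first N!!, mark V passive
    return None

def expand(rules: List[Rule], metarules) -> List[Rule]:
    """Compile the full grammar once, before parsing, by applying every metarule."""
    derived = [m(r) for m in metarules for r in rules]
    return rules + [r for r in derived if r is not None]

grammar = expand(base_rules, [passive_metarule])
for name, lhs, rhs in grammar:
    print(f"<{name}: {lhs} -> {' '.join(rhs)}>")
# The derived <VP109: V! -> V[PAS] N!!2> licenses "Anne was given a job" but, because
# the rule name is preserved, not "*Anne was hired a job".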
parsing:
The system is initialized by expanding out the grammar. That is, the metarules are applied to the rules to produce the full rule set, which is then compiled and used by the parser. Metarules are not consulted during the process of parsing. One might well wonder about the possible benefits of the other alternative: a parser that made the metarule-derived rules to order each time they were needed, instead of consulting a precompiled list. This possibility has been explored by Kay (1982). Kay draws an analogy between metarules and phonological rules, modeling both by means of finite state transducers. We believe that this line is worth pursuing; however, the GPSG system currently operates off a precompiled set of rules. Application of ten metarules to forty basic rules yielded 283 grammar rules in the 1/1/82 version of the GPSG system. Since then the grammar has been expanded somewhat, though the current version is still undergoing some debugging, and the number of rules is unstable. The size of the grammar-plus-metarules system grows by a factor of five or six through the rule compilation. The great practical advantage of using a metarule-induced grammar is, therefore, that the work of designing and revising the system of linguistic rules can proceed on a body of statements that is under twenty percent of the size it would be if it were formulated as a simple list of context-free rules. The system uses a standard type of top-down parser with no lookahead, augmented slightly to prevent it from looking for a given constituent starting in a given spot more than once. It produces, in parallel, all legal parse trees for a sentence, with semantic translations associated with each node. What we have outlined is a natural language system that is a direct implementation of a linguistic theory. We have argued that in this case the linguistic theory has the special appeal of computational tractability (promoted by its context-freeness), and that the system as a whole offers the hope of a happy marriage of linguistic theory, mathematical logic, and advanced computer applications. The system's theoretical underpinnings give it compatibility with current research in Generalized Phrase Structure Grammar, and its augmented first order logic gives it compatibility with a whole body of ongoing research in the field of model-theoretic semantics. The work done thus far is only the first step on the road to a robust and practical natural language processor, but the guiding principle throughout has been extensibility, both of the grammar, and of the applicability to various spheres of computation. Grateful acknowledgement is given to two brave souls, Steve Gadol and Bob Kanefsky, who helped give this system some of its credibility by implementing the actual hook-up with HIRE. Thanks are also due Robert Filman and Bert Raphael for helpful comments on an early version of this paper. And a special thanks is due Richard Weyhrauch, for encouragement, wise advice, and comfort in times of debugging.
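The parsing strategy just described - a plain top-down search over the precompiled context-free rules, with a table that keeps the parser from looking for the same constituent at the same position twice, and which returns all parses in parallel - can be sketched in a few lines of Python. The grammar and lexicon below are a toy stand-in, not the compiled GPSG rule set, and the function is a sketch of the general technique rather than the system's actual parser.

def parse(tokens, grammar, lexicon, start="S"):
    """Top-down, no-lookahead CF parser returning all complete parses.
    `seen` caches results per (category, position), so a given constituent
    is never sought twice starting from the same spot."""
    seen = {}

    def expand(cat, i):
        key = (cat, i)
        if key in seen:                 # already computed: reuse the cached parses
            return seen[key]
        seen[key] = []                  # placeholder so a recursive call terminates
        results = []
        if i < len(tokens) and cat in lexicon.get(tokens[i], ()):
            results.append(((cat, tokens[i]), i + 1))
        for lhs, rhs in grammar:
            if lhs != cat:
                continue
            partials = [((), i)]        # (children so far, next position)
            for sym in rhs:
                partials = [(kids + (tree,), k)
                            for kids, j in partials
                            for tree, k in expand(sym, j)]
            results.extend(((cat,) + kids, j) for kids, j in partials)
        seen[key] = results
        return results

    return [tree for tree, end in expand(start, 0) if end == len(tokens)]

GRAMMAR = [("S", ["NP", "VP"]), ("VP", ["V", "NP"]), ("NP", ["Det", "N"]), ("NP", ["PN"])]
LEXICON = {"Bill": {"PN"}, "interviewed": {"V"}, "every": {"Det"}, "applicant": {"N"}}
for t in parse("Bill interviewed every applicant".split(), GRAMMAR, LEXICON):
    print(t)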
semantics:
The semantics handler uses the translation rule associated with a node to construct its semantics from the semantics of its daughters. This construction makes crucial use of a procedure that we call Cooper storage (after Robin Cooper; see below).In the spirit of current research in formal semantics, each syntactic constituent is associated directly with a single logic expression (modulo Cooper Storage), rather than any program or procedure for producing such an expression. Our semantic analysis thus embraces the principle of "surface compositionality." The semantic representations derived at each node are referred to as the Logical Representation (LR).The disambiguator provides the crucial transition from LR to HIRoE queries; the disambiguator uses information about the sort, or domoin of definition, of various terms in the logical representation.One of the most important functions of the disambiguator is to eliminate parses that do not make sense in the conceptual scheme of HIRE.HIRE is a relational database with a certain amount of inferencin9 capability.It is implemented in SPHERE, a database system which is a descendant of FOL (described in Weyhrauch (1980)).Many of the relation-names output by the disambiguator are derived relations defined by axioms in SPHERE.The SPHERE environment was important for this application, since it was essential to have something that could process first-order logical output, and SPHERE does just that.A noticeable recent trend in database theory has been a move toward an interdisciplinary comingling of mathematical logic and relational database technology (see especially Gallaire and Minker (1978) and Gallaire, Minker and Nicolas (198])).We regard it as an important fact about the GPSG system that links computational linguistics to first-order logical representation just as the work referred to above has linked first-order logic to relational database theory. We believe that SPHERE offers promising prospects for a knowledge representation system that is principled and general in the way that we have tried to exemplify in our syntactic and semantic rule system.Filman, Lamping and Montalvo (]982) present details of some capabilities of SPHERE that we have not as yet exploited in our work, involving the use of multiple contexts to represent viewpoints, beliefs, and modalities, which are generally regarded as insuperable stumbling-blocks to first-order logic approaches.Thus far the linguistic work we have described has been in keeping with GPSG presented in the papers cited above. However two semantic innovations have been introduced to facilitate the disambiguator's translation from LR to a HIRE query.As a result the linguistic system version of LR has two new properties:(1) The intensional logic of the published work was set aside and LR was designed to be an extensional first-order language. Although constituent translations built up on the way to a root node may be second-order, the systemmaintains first-order reducibility. This reducibility is illustrated by the following analysis of noun phrases as second-order properties (essentially the analysis of Montague (]970)). 
For example, the proper name Egon and the quantified noun phrase every opplicant are both translated as sets of properties: Egon = LAMBDA P (P EGON) Every applicant = LAMBDA P (FORALL X ((APPLICANT X) --> (P X)))Egon is translated as the set of properties true of Egon, and every applicant, as the set of properties true of all applicants.Since basic predicates in the logic are first-order, neither of the above expressions can be made the direct • argument of any basic predicate;instead the argument is some unique entity-level variable which is later bound to the quantifier-expression by quantifying in.This technique is essentially the storage device proposed in Cooper (1975) . One advantage of this method of "deferring" the introduction into the interpretation process of phrases with quantifier meanings is that it allows for a natural, nonsyntactic treatment of scope ambiguities. Another is that with a logic limited to first-order predicates, there is still a natural treatment for coordinated noun phrases of apparently heterogeneous semantics, such as Egon and every applicant.(2) HIRE represents events as objects. I n order to accomodate this many-to-many mapping between a verb and particular relations in a knowledge base, the lexicon stipulates special relations that link a verb to its eventual arguments.Following Fillmore (1968), these mediating relations are called case roles.The disambiguator narrows the case roles down to specific knowledge base relations.To take a simple example, Anne works for HP has a logical representation reducible to:(EXISTS SIGMA (AND (EMPLOYMENT SIGMA) (AG SIGMA ANNE) (LOC SIGMA HP)))Here SIGMA is a variable over situations or event instantiations, s The formula may be read, "There is an employment-situation whose Agent is Anne and whose Location is HP." The lexical entry for work supplies the information that its subject is an Agent and its complement a Location. The disambiguator now needs to further specify the case roles as HIRE relations.It does this by treating each atomic formula in the expression locally, using the fact that Anne is a person in order to interpret AG, and the fact that HP is an organization in order to interpret LOC. In this case, it interprets the AG role as employment.employee and the LOC role as employment.organization.The advantages of using the roles in Logical Representation, rather than going directly to predicates in a knowledge base, include (1) the ability to interpret at least some prepositional phrases, those known as adjuncts, without subcategorizing verbs specially for them, since the case role may be supplied either by a verb or a preposition.(2) the option of interpreting 'vague' verbs such as have and give using case roles without event types. These verbs, then, become "purely" relational. representations, and to make all knowledge base-specific predicates and relations the exclusive province of the disambiguator.One important means to that end is case roles, which allow us a level of abstract, purely "linguistic" relations to mediate between logical representations and HIRE queries.Another is the use of general event types such as labor, to replace event-types specific to HIRE, such as employments.The case roles maintain a separation between the domain representation language and LR. 
Insofar as that separation is achieved, absolute portability of the system, up to and including the lexicon, is an attainable goal. Absolute portability obviously has immediate practical benefits for any system that expects to handle a large fragment of English, since the effort in moving from one application to another will be limited to "tuning" the disambiguator to a new ontology, and adding "specialized" vocabulary. The actual rules governing the production of first-order logical representations make no reference to the facts of HIRE. The question remains of just how portable the current lexicon is; the answer is that much of it is already domain independent. Quantifiers like every (as we saw in the discussion of NP semantics) are expressed as logical constants; verbs like give are expressed entirely in terms of the case relations that hold among their arguments. Verbs like work can be abstracted away from the domain by a simple extension. The obvious goal is to try to give domain-independent representations to a core vocabulary of English that could be used in a variety of application domains.
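To make the division of labor concrete, the following is a minimal sketch, in Python, of how a disambiguator of this kind might narrow case roles such as AG and LOC to knowledge-base relations by consulting the sorts of their arguments, as in the Anne works for HP example above. The sort table, the role-mapping entries, and all function names are assumptions introduced purely for illustration; they are not the actual HIRE/SPHERE interface.

SORTS = {"ANNE": "person", "HP": "organization"}

ROLE_MAP = {
    ("employment", "AG", "person"): "employment.employee",
    ("employment", "LOC", "organization"): "employment.organization",
}

def disambiguate(atom, event_sort):
    # Rewrite one atomic formula (role, event-variable, argument) locally,
    # using the sort of the argument to pick a knowledge-base relation.
    role, event_var, arg = atom
    kb_relation = ROLE_MAP.get((event_sort, role, SORTS[arg]))
    if kb_relation is None:
        raise ValueError("no reading for %r in this conceptual scheme" % (atom,))
    return (kb_relation, event_var, arg)

# "Anne works for HP":
# (EXISTS SIGMA (AND (EMPLOYMENT SIGMA) (AG SIGMA ANNE) (LOC SIGMA HP)))
lr_atoms = [("AG", "SIGMA", "ANNE"), ("LOC", "SIGMA", "HP")]
print([disambiguate(atom, "employment") for atom in lr_atoms])
# -> [('employment.employee', 'SIGMA', 'ANNE'),
#     ('employment.organization', 'SIGMA', 'HP')]

On this picture, porting to a new domain would amount to replacing the sort table and the role map, which is the sense of "tuning" the disambiguator intended above.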
an example:
We shall now give a slightly more detailed illustration of how the syntax and compositional semantics rules work. We are still simplifying considerably, since we have selected an example where role frames are not involved, and we are not employing features on nodes. Here we have the grammar of a trivial subset of English. The syntax of a lexical entry is <L: C: T>, where L is the spelling of the item, C is its grammatical category and feature specification (if other than the default set) and T is its translation into LR.

Consider how we assign an LR to a sentence like Every applicant is competent. The translation of every supplies most of the structure of the universal quantification needed in LR. It represents a function from properties to functions from properties to truth values, so when applied to applicant it yields a constituent, namely every applicant, which has one of the property slots filled, and represents a function from properties to truth values; it is:

(LAMBDA P (FORALL X ((APPLICANT X) IMPLIES (P X))))

This function can now be applied to the function denoted by competent, i.e. (LAMBDA Y (EXPERT.LEVEL HIGH Y)). This yields:

(FORALL X ((APPLICANT X) IMPLIES ((LAMBDA Y (EXPERT.LEVEL HIGH Y)) X)))

And after one more lambda-conversion, we have:

(FORALL X ((APPLICANT X) IMPLIES (EXPERT.LEVEL HIGH X)))

Fig. 1 shows one parse tree that would be generated by the above rules, together with its logical translation. The sentence is Bill interviewed every applicant. The complicated translation of the VP is necessary because INTERVIEW is a one-place predicate that takes an entity-type argument, not the type of function that every applicant denotes. We thus defer combining the NP translation with the verb by using Cooper storage. A translation with a stored NP is represented above in angle-brackets. Notice that at the S node the NP every applicant is still stored, but the subject is not stored. It has directly combined with the VP, by taking the VP as an argument. INTERVIEW is itself a two-place predicate, but one of its argument places has been filled by a place-holding variable, X1. There is thus only one slot left. The translation can now be completed via the operations of Storage Retrieval and lambda conversion. First, we simplify the part of the semantics that isn't in storage: the function (LAMBDA P (P BILL)) has been evaluated with P set to the value (INTERVIEW X1); this is a conventional lambda-conversion. The rule for storage retrieval is to make a one-place predicate of the sentence translation by lambda-binding the place-holding variable, and then to apply the NP translation as a function to the result. The S-node translation above then becomes the desired final result.
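To make the storage-and-retrieval step concrete, the following is a minimal sketch, in Python, that mimics the derivation just given over a hypothetical toy model. The NP translations are written as functions from properties to truth values, the object NP is deferred behind a place-holding variable, and retrieval lambda-binds that variable and applies the stored NP to the result. The model, the entity names, and the argument conventions are assumptions made only for illustration; this is not the system's own code.

DOMAIN = {"bill", "ann", "egon"}
APPLICANTS = {"ann", "egon"}
INTERVIEWED = {("bill", "ann"), ("bill", "egon")}   # (subject, object) pairs

applicant = lambda x: x in APPLICANTS

# NP translations: sets of properties, i.e. functions from properties to truth values.
bill = lambda p: p("bill")
every_applicant = lambda p: all((not applicant(x)) or p(x) for x in DOMAIN)

# VP "interviewed every applicant" with the object NP deferred via Cooper storage:
# the verb is applied to a place-holding variable x1 rather than to the stored NP.
def vp_with_placeholder(x1):
    return lambda subj: (subj, x1) in INTERVIEWED

# At the S node the subject combines directly with the VP; the object stays stored.
def s_with_storage(x1):
    return bill(vp_with_placeholder(x1))

# Storage retrieval: lambda-bind the place-holder, then apply the stored NP to it.
s_final = every_applicant(lambda x1: s_with_storage(x1))
print(s_final)   # True: in this toy model Bill interviewed every applicant

The same retrieval step, applied with the stored NP taken out at a different point, is what yields alternative quantifier scopings without any extra syntactic machinery.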
conclusion:
What we have outlined is a natural language system that is a direct implementation of a linguistic theory. We have argued that in this case the linguistic theory has the special appeal of computational tractability (promoted by its context-freeness), and that the system as a whole offers the hope of a happy marriage of linguistic theory, mathematical logic, and advanced computer applications. The system's theoretical underpinnings give it compatibility with current research in Generalized Phrase Structure Grammar, and its augmented first-order logic gives it compatibility with a whole body of ongoing research in the field of model-theoretic semantics. The work done thus far is only the first step on the road to a robust and practical natural language processor, but the guiding principle throughout has been extensibility, both of the grammar, and of the applicability to various spheres of computation.
introduction:
This paper is an interim progress report on linguistic research carried out at Hewlett-Packard Laboratories since the summer of 1981. The research had three goals: (1) demonstrating the computational tractability of Generalized Phrase Structure Grammar (GPSG), (2) implementing a GPSG system covering a large fragment of English, and (3) establishing the feasibility of using GPSG for interactions with an inferencing knowledge base. Section 2 describes the general architecture of the system. Section 3 discusses the grammar and the lexicon. A brief discussion of the parsing technique used is found in Section 4. Section 5 discusses the semantics of the system, and Section 6 presents a detailed example of a parse tree complete with semantics. Some typical examples that the system can handle are given in the Appendix.

The system is based on recent developments in syntax and semantics, reflecting a modular view in which grammatical structure and abstract logical structure have independent status. The understanding of a sentence occurs in a number of stages, distinct from each other and governed by different principles of organization. We are opposed to the idea that language understanding can be achieved without detailed syntactic analysis. There is, of course, a massive pragmatic component to human linguistic interaction. But we hold that pragmatic inference makes use of a logically prior grammatical and semantic analysis. This can be fruitfully modeled and exploited even in the complete absence of any modeling of pragmatic inferencing capability. However, this does not entail an incompatibility between our work and research on modeling discourse organization and conversational interaction directly. Ultimately, a successful language understanding system will require both kinds of research, combining the advantages of precise, grammar-driven analysis of utterance structure and pragmatic inferencing based on discourse structures and knowledge of the world. We stress, however, that our concerns at this stage do not extend beyond the specification of a system that can efficiently extract literal meaning from isolated sentences of arbitrarily complex grammatical structure. Future systems will exploit the literal meaning thus extracted in more ambitious applications that involve pragmatic reasoning and discourse manipulation.

The system embodies two features that simultaneously promote extensibility, facilitate modification, and increase efficiency. The first is that its grammar is context-free in the informal sense sometimes (rather misleadingly) used in discussions of the autonomy of grammar and pragmatics: the syntactic rules and the semantic translation rules are independent of the specific application domain. Our rules are not devised ad hoc with a particular application or type of interaction in mind. Instead, they are motivated by recent theoretical developments in natural language syntax, and evaluated by the usual linguistic canons of simplicity and generality. No changes in the knowledge base or other exigencies deriving from a particular context of application can introduce a problem for the grammar (as distinct, of course, from the lexicon). The second relevant feature is that the grammar in the system is context-free in the sense of formal language theory.
This makes the extensive mathematical literature on context-free phrase structure grammars (CF-PSG's) directly relevant to the enterprise, and permits utilization of all the well-known techniques for the computational implementation of context-free grammars. It might seem anachronistic to base a language understanding system on context-free parsing. As Pratt (1975, 423) observes: "It is fashionable these days to want to avoid all reference to context-free grammars beyond warning students that they are unfit for computer consumption as far as computational linguistics is concerned." Moreover, widely accepted arguments have been given in the linguistics literature to the effect that some human languages are not even weakly context-free and thus cannot possibly be described by a CF-PSG. However, Gazdar and Pullum (1982) answer all of these arguments, showing that they are either formally invalid or empirically unsupported or both. It seems appropriate, therefore, to take a renewed interest in the possibility of CF-PSG description of human languages, both in computational linguistics and in linguistic research generally.
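One of the "well-known techniques" alluded to above is chart parsing of context-free grammars. The following is a minimal CKY recognizer sketch, in Python, over a toy Chomsky-normal-form grammar for the example sentence used earlier; the grammar, the lowercased input, and the function name are illustrative assumptions only and are unrelated to the actual parser of the system described here.

LEXICAL = {
    "bill": {"NP"}, "interviewed": {"V"},
    "every": {"Det"}, "applicant": {"N"},
}
BINARY = {("NP", "VP"): "S", ("V", "NP"): "VP", ("Det", "N"): "NP"}

def cky_recognize(words):
    # Standard CKY: fill a triangular chart of categories spanning words i..j.
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(LEXICAL.get(w, set()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for left in chart[i][k]:
                    for right in chart[k][j]:
                        parent = BINARY.get((left, right))
                        if parent:
                            chart[i][j].add(parent)
    return "S" in chart[0][n]

print(cky_recognize("bill interviewed every applicant".split()))   # True

The point is simply that once the grammar is context-free in the formal sense, recognition and parsing inherit well-understood polynomial-time algorithms of this kind.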
Appendix: Grateful acknowledgement is given to two brave souls, Steve Gadol and Bob Kanefsky, who helped give this system some of its credibility by implementing the actual hook-up with HIRE. Thanks are also due Robert Filman and Bert Raphael for helpful comments on an early version of this paper. And a special thanks is due Richard Weyhrauch, for encouragement, wise advice, and comfort in times of debugging.
| null | null | null | null | {
"paperhash": [
"cocchiarella|situations_and_attitudes.",
"pratt|lingol:_a_progress_repor",
"hsu|lingol-a_progress_report"
],
"title": [
"Situations and Attitudes.",
"LINGOL: a progress repor",
"LINGOL-A Progress Report"
],
"abstract": [
"In this provocative book, Barwise and Perry tackle the slippery subject of \"meaning, \" a subject that has long vexed linguists, language philosophers, and logicians.",
"A new parsing algorithm is described. It is intended for use with advice-taking (or augmented) phrase structure grammars of the type used by Woods, Simmons. Heidorn and the author. It has the property that it is guaranteed not to propose a phrase unless there exists a continuation of the sentence seen thus far, in which the phrase plays a role in some surface structure of that sentence. The context in which this algorithm constitutes a contribution to current issues in parsing methodology is discussed, and we present a case for reversing the current trend to ever more complex control structures in natural language systems.",
"the information in the two components is duplicated In fact, he can omit the generative component entirely and put everything in the cognitive component, though at some cost in resource consumption At run time, no f i rm commitment is made by the cognitive component to a particular choice of surface structure of an ambiguous sentence, allowing the generative component to pick and choose when the cognitive component has not had enough information to decide At present L I N C O L users are encouraged to try to make their cognitive component intelligent enough to make the right decision, and so far no L I N C O L programs have attempted disambiguation in the generative component. One would expect this to change as people attempt more sophisticated programs. A L I N C O L program is a set of rules each having three components: a context-free rule, a cognitive function and a generative function Their respective roles are as follows The CF rule specifies a general English construction, the cognitive component (or \"critic\") supplies the expertise about that construction and the generative component supplies the information about the target language that may be relevant to this English construction. (Our tacit assumption of English as the source language reflects L I N G O L ' s applications to date) It is fashionable these days to want to avoid alt reference to context-free grammars beyond warning students of computational linguistics that they are unfit for computer consumption as far as computational linguistics is concerned In L I N C O L , as in ATN's [Woods 1969], their role is dif ferent from that in, say, the Harvard Predictive Analyzer [Kuno 1965]. Instead of being used to encode all information about English, they form the basis of a pattern-directed non-deterministic programming language. Th i s strategy has several advantages (i) It allows the programmer to structure his program as a set of relatively self-contained modules, thereby decreasing the number of things he has to keep in his head at once when looking at a particular part of his program. ( i i ) It eliminates much of the testing-for-cases control structure the programmer would need in a non-pattern-driven"
],
"authors": [
{
"name": [
"N. Cocchiarella",
"J. Barwise",
"J. Perry"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"V. Pratt"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chun F. Hsu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null
],
"s2_corpus_id": [
"124893762",
"61900604",
"17905112"
],
"intents": [
[],
[],
[]
],
"isInfluential": [
false,
false,
false
]
} | null | 512 | 0.060547 | null | null | null | null | null | null | null | null |
bbebc24ebd5f678bb74ce62fa0236e8c11d9df2d | 8424668 | null | Dependencies of Discourse Structure on the Modality of Communication: Telephone vs. Teletype | A desirable long-range goal in building future speech understanding systems would be to accept the kind of language people spontaneously produce. We show that people do not speak to one another in the same way they converse in typewritten language. Spoken language is finer-grained and more indirect. The differences are striking and pervasive. Current techniques for engaging in typewritten dialogue will need to be extended to accomodate the structure of spoken language. | {
"name": [
"Cohen, Philip R. and",
"Fertig, Scott and",
"Starr, Kathy"
],
"affiliation": [
null,
null,
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 21 | 8 | null | If a machine could listen, how would we talk to it? Tnis question will be hard to answer definitively until a good mechanical listener is developed.As a next best approximation, this paper presents results of an exploration of how people talk to one another in a domain for which keyboard-based natural language dialogue systems would be desirable, and have already been built (Robinson et al., 1980; Winograd, 1972) .Our observations are based on transcripts of person-to-person telephone-mediated and teletype-mediated dialogues. In these transcripts, one specific kind of communicative act dominates spoken task-related discourse, but is nearly absent from keyboard discourse. Importantly, when this act is performed vocally it is never performed directly. Since most of the utterances in these simple dialogues do not signal the speaker's intent, techniques for inferring intent will be crucial for engaging in spoken task-related discourse. The paper suggests how a plan-based theory of communication (Cohen and Perrault, 1979; Perrault and Allen, 1980) can uncover the intentions underlying the use of various forms.research was supported by the National Institute of Education under contract US-NIE-C-400-76-0116 to the Center for the Study of Reading of the University of Illinois and Bolt, Beranek and Newman, Inc.Motivated by Rubin's (1980) taxonomy of language experiences and influenced by Chapanis et al.'s (1972 Chapanis et al.'s ( , 1977 and Grosz' (1977) communication mode and task-oriented dialogue studies, we conducted an exploratory study to investigate how the structure of instruction-giving discourse depends on the communication situation in which it takes place.Twenty-five subjects ("experts") each instructed a randomly chosen "apprentice" in assembling a toy water pump.All subjects were paid volunteer students from the Lhiversity of Illinois. Five "dialogues" took place in each of the following modalities: face-to-face, via telephone, teletype ("linked" CRT' s) , (non-interactive) audiotape, and (non-interactive) written.In all modes, the apprentices were videotaped as they followed the experts ' instructions.Telephone and Teletype dialogues were analyzed first since results would have implications for the design of speech understanding and production systems.Each expert participated in the experiment on two consecutive days, the first for training and the second for instructing an apprentice. Subjects playing the expert role ware trained by: following a set of assembly directions consisting entirely of imperatives, assembling the pump as often as desired, and then instructing a research assistant.This practice session took place face-to-face. Experts knew the research assistant already knew how to assemble the pump. Experts were given an initial statement of the purpose of the experiment, which indicated that communication would take place in one of a n~ber of different modes, but were not informed of which modality they would communicate in until the next day.In both modes, experts and apprentices were located in different rooms.Experts had a set of pump parts that, they were told, were not to be assembled but could be manipulated. In Telephone mode, experts communicated via a standard telephone and apprentices communicated through a speaker-phone, which did not need to be held and which allowed simultaneous two-way communication. 
Distortion of the expert's voice was apparent, but not measured. Subjects in "Teletype" (TTY) mode typed their co~mnunication on Elite Datamedia 1500 CRT terminals connected by the Telenet computer network to a computer at Bolt, Beranek and Newman, Inc. The terminals were "linked" so that whatever was typed on one would appear on the other. Simultaneous typing was possible and did occur• Subjects were informed that their typing would not appear simultaneously on either terminal. Response times averaged 1 to 2 seconds, with occasionally longer delays due to system load.The following are representative fragments of Telephone and Teletype discourse. "fit the blue cap over the tube end done put the little black ring into the large blue cap with the hiole in it... ok put the pink valve on the twD pegs in that blue cap...Communication in Telephone mode has a distinct pattern of "find thex" "put it into/onto/over the y", in which reference and predication are addressed in different steps.To relate these steps, more reliance is placed on strategies for signalling dialogue coherence, such as the use of pronouns.Teletype communication involves primarily the use of imperatives such as "put the x Into/onto/around the y".Typically, the first time each object (X) is mentioned in a TrY discourse is within a request for a physical action.This research aims to develop an adequate method for conducting discourse analysis that will be useful to the computational linguist. The method used here integrates psychological, linguistic, and formal approaches in order to characterize language use. Psychological methods are needed in setting up protocols that do not bias the interesting variables.Linguistic methods are needed for developing a scheme for describing the progress of a discourse. Finally, formal methods are essential for stating theories of utterance interpretation in context.To be more specific, we are ultimately interested in similarities and differences in utterance processing across modes, Utterance processing clearly depends on utterance form and the speaker ' s intent. The utterances in the transcripts are therefore categorized by the intentions they are used to achieve. Both utterances and categorizations become data for cross-modal measures as well as for formal methods. Once intentions differing across modes are isolated, our strategy is to then examine the utterance forms used to achieve those intentions. Thus, utterance forms are not compared directly across modes; only utterances used to achieve the same goals are compared, and it is those goals that are expected to vary across modes. With form and function identified, one can then proceed to discuss how utterance processing may differ from one mode to another.Our plan-based theory of speech acts will be used to explain how an utterance's intent coding can be derived from the utterance's form and the prior interaction.A computational model of intent recognition in dialogue (Al~en, 1979; Cohen, 1979; Sidner et al., 1981) can then be used to mimic the theory's assignment of intent. Thus, the theory of speech act interpretation will describe language use in a fashion analogous to the way that a generative grammar describes how a particular deep structure can underlie a given surface structure.The first stage of discourse analysis involved the coding of the conm~unicator's intent in making various utterances• Since attributions of intent are hard to make reliably, care was taken to avoid biasing the results. 
Following the experiences of Sinclair and Coulthard (1975) , Dote et al. (1978) and Mann et al. (1975) , a coding scheme was developed and two people trained in its use.The coders relied both on written transcripts and on videotapes of the apprentices' assembly.The scheme, which was tested and revised on pilot data until reliability was attained, included a set of approximately 20 "speech act" categories that ware used to label intent, and a set of "operators" and propositions that were used to describe the assembly task, as in (Sacerdoti, 1975) .The operators and propositions often served as the propositional content of the communicative acts. In addition to the domain actions, pilot data led us to include an action of "physically identifying the referent of a description" as part of the scheme (Cohen, 1981) . This action will be seen to be requested explicitly by Telephone experts, but not by experts in Teletype mode.Of course, a coding scheme must not only capture the domain of discourse, it must be tailored to the nature of discourse per se. Many theorists have observed that a speaker can use a ntmber of utterances to achieve a goal, and can use one utterance to achieve a number of goals. Correspondingly, the coders could consider utterances as jointly achieving one intention (by "bracketing" them), could place an utterance in multiple categories, and could attribute more than one intention to the same utterance or utterance part.It was discovered that the physical layout of a transcript, particularly the location of line breaks, affected which utterances were coded.To ensure uniformity, each coder first divided each transcript into utterances that he or she would code.These joint "bracketings" were compared by a third party to yield a base set of codable (sic) utterance parts. The coders could later bracket utterances differently if necessary.The first attempt to code the transcripts was overly ambitious --coders could not keep 20 categories and their definitions in mind, even with a written coding manual for reference. Our scheme was then scaled back --only utterances fitting the following categories were considered:Requests-for-assembly-actions (RAACT) (e.g., "put that on the hole".)Requests-for-orientation-actions (RORT) (e.g., "the other way around", "the top is the bottom". )Requests-to-pick-up (RPUP) (e.g., "take the blue base".)Requests-for-identification (RID) (e.g., "there is a little yellow rubber".) piece oRequests-for-other (ROTH) (e.g., requests for repetition, requests to stop, etc.)Inform-completion(action) (e.g., "OK", "yeah", "got it".)Label (e.g., "that's a plunger")Interrater reliabilities for each category (within each mode), measured as the nunber of agreements X 2 divided by the ntmber of times that category was coded, ware high (above 90%).Since each disagreement counted twice (against both categories that ware coded), agreements also counted twice.Since most of each dialogue consisted of the making of requests, the first analysis examined the frequency of the various kinds of requests in the corpus of five transcripts for each modality. 's (1972, 1977) finding that voice modes were about "twice as wordy" as non-voice modes. Here, there are approximately twice as many requests in Telephone mode as Teletype.Chapenis et al. 
examined how linguistic behavior differed across modes in terms of measures of sentence length, message length, number of words, sentences, messages, etc. In contrast, the present study provides evidence of how these modes differ in utterance function. Identification requests are much more frequent in Telephone dialogues than in Teletype conversations. In fact, they constitute the largest category of requests--fully 35%. Since utterances in the RORT, RPUP, and ROTH categories will often be issued to clarify or follow up on a previous request, it is not surprising they would increase in number (though not percentage) with the increase in RID usage. Furthermore, it is sensible that there are about the same number of requests for assembly actions (and hence half the percentage) in each mode since the same "assembly work" is accomplished. Therefore, identification requests seem to be the primary request differentiating the two modalities.

However, frequency data include mistakes, dialogue repairs, and repetition. Perhaps identification requests occur primarily after referential miscommunication (as occurs for teletype dialogues (Cohen, 1981)). One might then argue that people would speak more carefully to machines and thus would not need to use identification requests frequently. Alternatively, the use of such requests as a step in a Telephone speaker's plan may truly be a strategy of engaging in spoken task-related discourse that is not found in TTY discourse.

To explore when identification requests were used, a second analysis of the utterance codings was undertaken that was limited to "first time" identifications. Each time a novice (rightly or wrongly) first identified a piece, the communicative act that caused him/her to do so was indicated. However, a coding was counted only if that speech act was not jointly present with another prior to the novice's part identification attempt. Table II indicates the results for each subject in Telephone and Teletype modes.

Telephone mode:
Subject  RID  RPUP  RAACT
1         9    2     1
2         1   10     1
3        11    1     0
4         9    1     0
5        10    0     0

Teletype mode:
Subject  RID  RPUP  RAACT
1         1    2     9
2         0    2     9
3         1    2     9
4         0    6     3
5         2    6     4

Subjects were classified as habitual users of a communicative act if, out of 12 pieces, the subject "introduced" at least 9 of the pieces with that act. In Telephone mode, four of five experts were habitual users of identification requests to get the apprentice to find a piece. In Teletype mode, no experts were habitual users of that act. To show a "modality effect" in the use of the identification request strategy, the number of habitual users of RID in each mode was subjected to Fisher's exact probability test (hypergeometric). Even with 5 subjects per mode, the differences across modes are significant (p = 0.023), indicating that Telephone conversation per se differs from Teletype conversation in the ways in which a speaker will make first reference to an object.

Thus far, explicit identification requests have been shown to be pervasive in Telephone mode and to constitute a frequently used strategy. One might expect that, in analogous circumstances, a machine might be confronted with many of these acts. Computational linguistics research then must discover means by which a machine can determine the appropriate response as a function, in part, of the form of the utterance. To see just which forms are used for our task, utterances classified as requests-for-identification were tabulated. Table III presents classes of these utterances, along with an example of each class.
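As a check on the significance level just reported, the one-tailed Fisher's exact probability for the implied 2x2 table (4 of 5 Telephone experts versus 0 of 5 Teletype experts classified as habitual RID users) can be computed directly from the hypergeometric distribution. The short Python sketch below is only a verification of that arithmetic under the table as laid out above; it is not part of the original analysis.

from math import comb

def table_prob(a, b, c, d):
    # Hypergeometric probability of one 2x2 table with these cell counts.
    return comb(a + b, a) * comb(c + d, c) / comb(a + b + c + d, a + c)

# Habitual RID users, from Table II: Telephone 4 of 5, Teletype 0 of 5.
a, b, c, d = 4, 1, 0, 5

# One-tailed Fisher's exact test: sum over tables with the same margins in
# which the Telephone cell is at least as large as observed.  Only 4 habitual
# users exist in total, so the observed table is already the most extreme one.
p = sum(table_prob(k, a + b - k, a + c - k, d - a + k) for k in range(a, a + c + 1))
print(round(p, 3))   # 0.024, i.e. the p = 0.023 reported above, up to rounding

The single most extreme table accounts for the whole tail here, which is why so small a sample can still reach significance.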
The utterance forms are divided into four major groups, to be explained below. One class of utterances comprising 7% of identification requests, called "supplemental NP" (e.g., "Put that on the opening in the other large tube. with the round top"), was unreliably coded and not considered for the analyses below. Category labels followed by "(?)" indicate that the utterances comprising those categories might also have been issued with rising intonation. Notice that in Telephone mode identification requests are never performed directly. No speaker used the paradigmatic direct forms, e.g. "Find the rubber ring shaped like an O", which occurred frequently in the written modality. However, the use of indirection is selective -- Telephone experts frequently use direct imperatives to perform assembly requests. Only the identification request seems to be affected by modality.

Many of the utterance forms can be analyzed as requests for identification once an act for physically searching for the referent of a description has been posited (Cohen, 1981). Finally, the means for performing the act will be some procedural combination of sensory actions (e.g., looking) and counting. The exact combination will depend on the description used. The utterances in Group A can then be analyzed as requests for IDENTIFY-REFERENT using Perrault and Allen's (1980) method of applying plan recognition to the definition of communicative acts.

A. Action-based Utterances

Case 1 ("There is a NP") can be interpreted as a request that the hearer IDENTIFY-REFERENT of NP by reasoning that a speaker's informing a hearer that a precondition to an action is true can cause the hearer to believe the speaker wants that action to be performed. All utterances that communicate the speaker's desire that the hearer do some action are labelled as requests. Using only rules about action, Perrault and Allen's method can also explain why Cases 2, 3, and 4 all convey requests for referent identification. Case 2 is handled by an inference saying that if a speaker communicates that an act will yield some desired effect, then one can infer the speaker wants that act performed to achieve that effect. Case 3 is an example of questioning a desired effect of an act (e.g., "Is the garbage out?") to convey that the act itself is desired. Case 4 is similar to Case 2, except the relationship between the desired effect and some action yielding that effect is presumed. In all these cases, ACT = LOOK-AT, and EFFECT = "HEARER SEE X". Since LOOK-AT is part of the "body" (Allen, 1979) of IDENTIFY-REFERENT, Allen's "body-action" inference will make the necessary connection, by inferring that the speaker wanted the hearer to LOOK-AT something as part of his IDENTIFY-REFERENT act.

Group B utterances constitute the class of fragments classified as requests for identification. Notice that "fragment" is not a simple syntactic classification. In Case 2, the speaker paralinguistically "calls for" a hearer response in the course of some linguistically complete utterance. Such examples of parallel achievement of communicative actions cannot be accounted for by any linguistic theory or computational linguistic mechanism of which we are aware. These cases have been included here since we believe the theory should be extended to handle them by reasoning about parallel actions. A potential source of inspiration for such a theory would be research on reasoning about concurrent programs. Case 1 includes NP fragments, usually with rising intonation.
The action to be performed is not explicitly stated, but must be supplied on the basis of shared knowledge about the discourse situation -- who can do what, who can see what, what each participant thinks the other believes, what is expected, etc. Such knowledge will be needed to differentiate the intentions behind a traveller's saying "the 3:15 train to Montreal?" to an information booth clerk (who is not intended to turn around and find the train), from those behind the uttering of "the smallest of the red pieces?", where the hearer is expected to physically identify the piece. According to the theory, the speaker's intentions conveyed by the elliptical question include 1) the speaker's wanting to know whether some relevant property holds of the referent of the description, and 2) the speaker's perhaps wanting that property to hold. Allen and Perrault (1980) suggest that properties needed to "fill in" such fragments come from shared expectations (not just from prior syntactic forms, as is current practice in computational linguistics). The property in question in our domain is IDENTIFIED-REFERENT(HEARER, NP), which is (somehow) derived from the nature of the task as one of manual assembly. Thus, expectations have suggested a starting point for an inference chain -- it is shared knowledge that the speaker wants to know whether IDENTIFIED-REFERENT(HEARER, NP). In the same way that questioning the completion of an action can convey a request for action, questioning IDENTIFIED-REFERENT conveys a request for IDENTIFY-REFERENT (see Case 3, Group A, above). Thus, by our positing an IDENTIFY-REFERENT act, and by assuming such an act is expected of the user, the inferential machinery can derive the appropriate intention behind the use of a noun phrase fragment. The theory should account for 48% of the identification requests in our corpus, and should be extended to account for an additional 6%. The next group of utterances cannot now, and perhaps should not, be handled by a theory of communication based on reasoning about action.

Group C utterances (as well as Group A, cases 1, 2, and 4) can be interpreted as requests for identification by a rule stipulated by Labov and Fanshel (1977) -- if a speaker ostensibly informs a hearer about a state-of-affairs for which it is shared knowledge that the hearer has better evidence, then the speaker is actually requesting confirmation of that state-of-affairs. In Telephone (and Teletype) modality, it is shared knowledge that the hearer has the best evidence for what she "has", how the pieces are arranged, etc. When the apprentice receives a Group C utterance, she confirms its truth perceptually (rather than by proving a theorem), and thereby identifies the referents of the NP's in the utterance. The indirect request for confirmation rule accounts for 66% of the identification request utterances (overlapping with Group A for 35%). This important rule cannot be explained in the theory. It seems to derive more from properties of evidence for belief than it does from a theory of action. As such, it can only be stipulated to a rule-based inference mechanism (Cohen, 1979), rather than be derived from more basic principles.

Group D utterance forms are the closest forms to direct requests for identification that appeared, though strictly speaking, they are not direct requests. Case 1 mentions "Look on", but does not indicate a search explicitly.
The interpretation of this utterance in Perrault and Allen's scheme would require an additional "body-action" inference to yield a request for identification. Case 2 is literally an informative utterance, though a request could be derived in one step. Importantly, the frequency of these "nearest neighbors" is minimal (3%).

The act of requesting referent identification is nearly always performed indirectly in Telephone mode. This being the case, inferential mechanisms are needed for uncovering the speaker's intentions from the variety of forms with which this act is performed. A plan-based theory of communication augmented with a rule for identifying indirect requests for confirmation would account for 79% of the identification requests in our corpus. A hierarchy of communicative acts (including their propositional content) can be used to organize derived rules for interpreting speaker intent based on utterance form, shared knowledge and shared expectations (Cohen, 1979). Such a rule-based system could form the basis of a future pragmatics/discourse component for a speech understanding system.

These results are similar in some ways to observations by Ochs and colleagues (Ochs, 1979; Ochs, Schieffelin, and Pratt, 1979). They note that parent-child and child-child discourse is often comprised of "sequential" constructions -- with separate utterances for securing reference and for predicating. They suggest that language development should be regarded as an overlaying of newly-acquired linguistic strategies onto previous ones. Adults will often revert to developmentally early linguistic strategies when they cannot devote the appropriate time/resources to planning their utterances. Thus, Ochs et al. suggest, when competent speakers are communicating while concentrating on a task, one would expect to see separate utterances for reference and predication. This suggestion is certainly backed by our corpus, and is important for computational linguistics since, to be sure, our systems are intended to be used in some task. It is also suggested that the presence of sequential constructions is tied to the possibilities for preplanning an utterance, and hence oral and written discourse would differ in this way. Our study upholds this claim for Telephone vs. Teletype, but does not do so for our Written condition in which many requests for identification occur as separate steps. Furthermore, Ochs et al.'s claim does not account for the use of identification requests in Teletype modality after prior referential miscommunication (Cohen, 1981). Thus, it would seem that sequential constructions can result from (what they term) planned as well as unplanned discourse.

It is difficult to compare our results with those of other studies. Chapanis et al.'s observation that voice modes are faster and wordier than teletype modes certainly holds here. However, their transcripts cannot easily be used to verify our findings since, for the equipment assembly problem, their subjects were given a set of instructions that could be, and often were, read to the listener. Thus, utterance function would often be predetermined.
Our subjects had to remember the task and compose the instructions afresh.(1977) study also cannot be directly compared for the phenomena of interest here since the core dialogues that were analyzed in depth employed a "mixed" communication modality in which the expert communicated with a third party by teletype.The third party, located in the same room as the apprentice, vocally transnitted the expert's communication to the apprentice, and typed the apprentice's vocal response to the expert.The findings of finer-grained and indirect vocal requests would not appear under these conditions. Thompson's (1980) extensive tabulation of utterance forms in a multiple modality comparison overlaps our analysis at the level of syntax. Both Thompson's and the present study are primarily concerned with extending the habitability of current systems by identifying phenomena that people use but which would be problematic for machines. However, our two studies proceeded along different lines. Thompson's was more concerned with utterance forms and less with pragmatic function, whereas for this study, the concerns are reversed in priority. Our priority stems from the observation that differences in utterance function will influence the processing of the same utterance form. However, the present findings cannot be said to contradict Thompson's (nor vice-verse) .Each corpus could perhaps be used to verify the findings in the other.Spoken and teletype discourse, even used for the same ends, differ in structure and in form. Telephone conversation about object assembly is dominated by explicit requests to find objects satisfying descriptions. However, these requests are never performed directly.Techniques for interpreting "indirect speech acts" thus may become crucial for speech understanding systems.These findings must be interpreted with two cautionary notes. First, the request-for-identification category is specific to discourse situations in which the topics of conversation include objects physically present to the hearer. Though the same surface forms might be used, if the conversation is not about manipulating concrete objects, different pragmatic inferences could be made.Secondly, the indirection results may occur only in conversations between humans.It is possible that people do not wish to verbally instruct others with fine-grained imperatives for fear of sounding condescending. Print may remove such inhibitions, as may talking to a machine. This is a question that cannot be settled until good speech understanding systems have been developed. We conjecture that the better the system, the more likely it will be to receive fine-grained indirect requests. It appears to us preferable to err on the side of accepting people's natural forms of speech than to force the user to think about the phrasing of utterances, at the expense of concentrating on the problem. | null | null | null | null | Main paper:
i. introduction:
If a machine could listen, how would we talk to it? Tnis question will be hard to answer definitively until a good mechanical listener is developed.As a next best approximation, this paper presents results of an exploration of how people talk to one another in a domain for which keyboard-based natural language dialogue systems would be desirable, and have already been built (Robinson et al., 1980; Winograd, 1972) .Our observations are based on transcripts of person-to-person telephone-mediated and teletype-mediated dialogues. In these transcripts, one specific kind of communicative act dominates spoken task-related discourse, but is nearly absent from keyboard discourse. Importantly, when this act is performed vocally it is never performed directly. Since most of the utterances in these simple dialogues do not signal the speaker's intent, techniques for inferring intent will be crucial for engaging in spoken task-related discourse. The paper suggests how a plan-based theory of communication (Cohen and Perrault, 1979; Perrault and Allen, 1980) can uncover the intentions underlying the use of various forms.research was supported by the National Institute of Education under contract US-NIE-C-400-76-0116 to the Center for the Study of Reading of the University of Illinois and Bolt, Beranek and Newman, Inc.Motivated by Rubin's (1980) taxonomy of language experiences and influenced by Chapanis et al.'s (1972 Chapanis et al.'s ( , 1977 and Grosz' (1977) communication mode and task-oriented dialogue studies, we conducted an exploratory study to investigate how the structure of instruction-giving discourse depends on the communication situation in which it takes place.Twenty-five subjects ("experts") each instructed a randomly chosen "apprentice" in assembling a toy water pump.All subjects were paid volunteer students from the Lhiversity of Illinois. Five "dialogues" took place in each of the following modalities: face-to-face, via telephone, teletype ("linked" CRT' s) , (non-interactive) audiotape, and (non-interactive) written.In all modes, the apprentices were videotaped as they followed the experts ' instructions.Telephone and Teletype dialogues were analyzed first since results would have implications for the design of speech understanding and production systems.Each expert participated in the experiment on two consecutive days, the first for training and the second for instructing an apprentice. Subjects playing the expert role ware trained by: following a set of assembly directions consisting entirely of imperatives, assembling the pump as often as desired, and then instructing a research assistant.This practice session took place face-to-face. Experts knew the research assistant already knew how to assemble the pump. Experts were given an initial statement of the purpose of the experiment, which indicated that communication would take place in one of a n~ber of different modes, but were not informed of which modality they would communicate in until the next day.In both modes, experts and apprentices were located in different rooms.Experts had a set of pump parts that, they were told, were not to be assembled but could be manipulated. In Telephone mode, experts communicated via a standard telephone and apprentices communicated through a speaker-phone, which did not need to be held and which allowed simultaneous two-way communication. Distortion of the expert's voice was apparent, but not measured. 
Subjects in "Teletype" (TTY) mode typed their co~mnunication on Elite Datamedia 1500 CRT terminals connected by the Telenet computer network to a computer at Bolt, Beranek and Newman, Inc. The terminals were "linked" so that whatever was typed on one would appear on the other. Simultaneous typing was possible and did occur• Subjects were informed that their typing would not appear simultaneously on either terminal. Response times averaged 1 to 2 seconds, with occasionally longer delays due to system load.The following are representative fragments of Telephone and Teletype discourse. "fit the blue cap over the tube end done put the little black ring into the large blue cap with the hiole in it... ok put the pink valve on the twD pegs in that blue cap...Communication in Telephone mode has a distinct pattern of "find thex" "put it into/onto/over the y", in which reference and predication are addressed in different steps.To relate these steps, more reliance is placed on strategies for signalling dialogue coherence, such as the use of pronouns.Teletype communication involves primarily the use of imperatives such as "put the x Into/onto/around the y".Typically, the first time each object (X) is mentioned in a TrY discourse is within a request for a physical action.This research aims to develop an adequate method for conducting discourse analysis that will be useful to the computational linguist. The method used here integrates psychological, linguistic, and formal approaches in order to characterize language use. Psychological methods are needed in setting up protocols that do not bias the interesting variables.Linguistic methods are needed for developing a scheme for describing the progress of a discourse. Finally, formal methods are essential for stating theories of utterance interpretation in context.To be more specific, we are ultimately interested in similarities and differences in utterance processing across modes, Utterance processing clearly depends on utterance form and the speaker ' s intent. The utterances in the transcripts are therefore categorized by the intentions they are used to achieve. Both utterances and categorizations become data for cross-modal measures as well as for formal methods. Once intentions differing across modes are isolated, our strategy is to then examine the utterance forms used to achieve those intentions. Thus, utterance forms are not compared directly across modes; only utterances used to achieve the same goals are compared, and it is those goals that are expected to vary across modes. With form and function identified, one can then proceed to discuss how utterance processing may differ from one mode to another.Our plan-based theory of speech acts will be used to explain how an utterance's intent coding can be derived from the utterance's form and the prior interaction.A computational model of intent recognition in dialogue (Al~en, 1979; Cohen, 1979; Sidner et al., 1981) can then be used to mimic the theory's assignment of intent. Thus, the theory of speech act interpretation will describe language use in a fashion analogous to the way that a generative grammar describes how a particular deep structure can underlie a given surface structure.The first stage of discourse analysis involved the coding of the conm~unicator's intent in making various utterances• Since attributions of intent are hard to make reliably, care was taken to avoid biasing the results. Following the experiences of Sinclair and Coulthard (1975) , Dote et al. (1978) and Mann et al. 
(1975) , a coding scheme was developed and two people trained in its use.The coders relied both on written transcripts and on videotapes of the apprentices' assembly.The scheme, which was tested and revised on pilot data until reliability was attained, included a set of approximately 20 "speech act" categories that ware used to label intent, and a set of "operators" and propositions that were used to describe the assembly task, as in (Sacerdoti, 1975) .The operators and propositions often served as the propositional content of the communicative acts. In addition to the domain actions, pilot data led us to include an action of "physically identifying the referent of a description" as part of the scheme (Cohen, 1981) . This action will be seen to be requested explicitly by Telephone experts, but not by experts in Teletype mode.Of course, a coding scheme must not only capture the domain of discourse, it must be tailored to the nature of discourse per se. Many theorists have observed that a speaker can use a ntmber of utterances to achieve a goal, and can use one utterance to achieve a number of goals. Correspondingly, the coders could consider utterances as jointly achieving one intention (by "bracketing" them), could place an utterance in multiple categories, and could attribute more than one intention to the same utterance or utterance part.It was discovered that the physical layout of a transcript, particularly the location of line breaks, affected which utterances were coded.To ensure uniformity, each coder first divided each transcript into utterances that he or she would code.These joint "bracketings" were compared by a third party to yield a base set of codable (sic) utterance parts. The coders could later bracket utterances differently if necessary.The first attempt to code the transcripts was overly ambitious --coders could not keep 20 categories and their definitions in mind, even with a written coding manual for reference. Our scheme was then scaled back --only utterances fitting the following categories were considered:Requests-for-assembly-actions (RAACT) (e.g., "put that on the hole".)Requests-for-orientation-actions (RORT) (e.g., "the other way around", "the top is the bottom". )Requests-to-pick-up (RPUP) (e.g., "take the blue base".)Requests-for-identification (RID) (e.g., "there is a little yellow rubber".) piece oRequests-for-other (ROTH) (e.g., requests for repetition, requests to stop, etc.)Inform-completion(action) (e.g., "OK", "yeah", "got it".)Label (e.g., "that's a plunger")Interrater reliabilities for each category (within each mode), measured as the nunber of agreements X 2 divided by the ntmber of times that category was coded, ware high (above 90%).Since each disagreement counted twice (against both categories that ware coded), agreements also counted twice.Since most of each dialogue consisted of the making of requests, the first analysis examined the frequency of the various kinds of requests in the corpus of five transcripts for each modality. 's (1972, 1977) finding that voice modes were about "twice as wordy" as non-voice modes. Here, there are approximately twice as many requests in Telephone mode as Teletype.Chapenis et al. 
examined how linguistic behavior differed across modes in terms of measures of sentence length, message length, ntm~ber of words, sentences, messages, etc.contrast, the present study provides evidence of how these modes differ in utterance function.Identification requests are much more frequent in Telephone dialogues than in Teletype conversations.In fact, they constitute the largest category of requests--fully 35%.Since utterances in the RORT, RPUP, and ROTH categories will often be issued to clarify or follow up on a previous request, it is not surprising they would increase in number (though not percentage) with the increase in RID usage. Furthermore, it is sensible that there are about the same number of requests for assembly actions (and hence half the percentage) in each mode since the same "assembly wDrk" is accomplished. ~t~rufore, identification requests seem to be the primary request differentiating the two modalities. However, frequency data include mistakes, dialogue repairs, and repetition. Perhaps identification requests occur primarily after referential misco~unication (as occurs for teletype dialogues (Cohen, 1981)). One might then argue that people would speak more carefully to machines and thus would not need to use identification requests frequently. Alternatively, the use of such requests as a step in a Telephone speaker's plan may truly be a strategy of engaging in spoken task-related discourse that is not found in TI~ discourse.To explore when identification requests were used, a second analysis of the utterance codings was undertaken that was limited to "first time" identifications.Each time a novice (rightly or wrongly) first identified a piece, the communicative act that caused him/her to do so was indicated. However, a coding was counted only if that speech act was not jointly present with another prior to the novice's part identification attempt. Table II indicates the results for each subject in Telephone and Teletype modes. 1 9 2 1 2 1 i0 1 3 ii 1 0 4 9 1 0 5 i0 0 0 RID RPUP RAACT 1 2 9 0 2 9 1 2 9 0 6 3 2 6 4 Subjects were classifed as habitual users of a communicative act if, out of 12 pieces, the subject "introduced" at least 9 of the pieces with that act. In Telephone mode, four of five experts were habitual users of identification requests to get the apprentice to find a piece. In Teletype mode, no experts were habitual users of that act.To show a "modality effect" in the use of the identification request strategy, the ntmber of habitual users of RID in each mode were subjected to the Fischer's exact probability test (hypergeometric).Even with 5 subjects per mode, the differences across modes are significant (p = 0.023), indicating that Telephone conversation per se differs from Teletype conversation in the ways in which a speaker will make first reference to an object.ThUS far, explicit identification requests have been shown to be pervasive in Telephone mode and to constitute a frequently used strategy. One might expect that, in analogous circumstances, a machine might be confronted with many of these acts.Computational linguistics research then must discover means by which a machine can determine the appropriate response as a function, in part, of the form of the utterance. To see just which forms are used for our task, utterances classified as requests-for-identification were tabulated. Table III presents classes of these utterance, along with an example of each class. 
The utterance forms are divided into four major groups, to be explained below.One class of utterances comprising 7% of identification requests, called "supplemental NP" (e .g., "Put that on the opening in the other large tube. with the round top"), was unreliably coded not c--6~-side~-6d for the analyses below. Category labels followed by "(?) " indicate that the utterances comprising those categories might also have been issued with rising intonation. Notice that in Telephone mode identification requests are never performed directly. No speaker used the paradigmatic direct forms, e.g. "Find the rubber ring shaped like an O", which occurred frequently in the written modality. However, the use of indirection is selective --Telephone experts frequently use direct imperatives to perform assembly requests. Only the identification-request seems to be affected by modality.Many of the utterance forms can be analyzed as requests for identification once an act for physically searching for the referent of a description has been posited (Cohen, 1981) . Finally, the means for performing the act will be some procedural combination of sensory actions (e.g., looking) and counting. The exact combination will depend on the description used. The utterances in Group A can then be analyzed as requests for IDENTIFY-REFERENT using Perrault and Allen' s (1980) method of applying plan recognition to the definition of communicative acts.A. Action-based Utterances Case 1 ("There is a NP") can be interpreted as a request that the hearer IDENTIFY-REFERENT of NP by reasoning that a speaker's informing a hearer that a precondition to an action is true can cause the hearer to believe the speaker wants that action to be performed. All utterances that communicate the speaker's desire that the hearer do some action are labelled as requests.Using only rules about action, Perrault and Allen's method can also explain why Cases 2, 3, and 4 all convey requests for referent identification. Case 2 is handled by an inference saying that if a speaker communicates that an act will yield some desired effect, then one can infer the speaker wants that act performed to achieve that effect. Case 3 is an example of questioning a desired effect of an act (e.g., "Is the garbage out?") to convey that the act itself is desired. Case 4 is similar to Case 2, except the relationship between the desired effect and some action yielding that effect is presumed.In all these cases, ACT = LOOK-AT, and EFFECT = "HEARER SEE X". Since LOOK-AT is part of the "body" (Allen, 1979) of IDENTIFY-REFERENT, Allen's "body-action" inference will make the necessary connection, by inferring that the speaker wanted the hearer to LOOK-AT something as part of his IDENTIFY-REFEPdR~T act.Group B utterances constitute the class of fragments classified as requests for identification.Notice that "fragment" is not a simple syntactic classification.In Case 2, the speaker peralinguistically "calls for" a hearer response in the course Of some linguistically complete utterance. Such examples of parallel achievement of communicative actions cannot be accounted for by any linguistic theory or computational linguistic mechanism of which ~ are aware. These cases have been included here since we believe the theory should be extended to handle them by reasoning about parallel actions.A potential source of inspiration for such a theory would be research on reasoning about concurrent programs.Case 1 includes NP fragments, usually with rising intonation. 
The action to be performed is not explicitly stated, but must be supplied on the basis of shared knowledge about the discourse situation -- who can do what, who can see what, what each participant thinks the other believes, what is expected, etc. Such knowledge will be needed to differentiate the intentions behind a traveller's saying "the 3:15 train to Montreal?" to an information booth clerk (who is not intended to turn around and find the train), from those behind the uttering of "the smallest of the red pieces?", where the hearer is expected to physically identify the piece. According to the theory, the speaker's intentions conveyed by the elliptical question include 1) the speaker's wanting to know whether some relevant property holds of the referent of the description, and 2) the speaker's perhaps wanting that property to hold. Allen and Perrault (1980) suggest that properties needed to "fill in" such fragments come from shared expectations (not just from prior syntactic forms, as is current practice in computational linguistics). The property in question in our domain is IDENTIFIED-REFERENT(HEARER, NP), which is (somehow) derived from the nature of the task as one of manual assembly. Thus, expectations have suggested a starting point for an inference chain -- it is shared knowledge that the speaker wants to know whether IDENTIFIED-REFERENT(HEARER, NP). In the same way that questioning the completion of an action can convey a request for action, questioning IDENTIFIED-REFERENT conveys a request for IDENTIFY-REFERENT (see Case 3, Group A, above). Thus, by our positing an IDENTIFY-REFERENT act, and by assuming such an act is expected of the user, the inferential machinery can derive the appropriate intention behind the use of a noun phrase fragment. The theory should account for 48% of the identification requests in our corpus, and should be extended to account for an additional 6%. The next group of utterances cannot now, and perhaps should not, be handled by a theory of communication based on reasoning about action. Group C utterances (as well as Group A, cases 1, 2, and 4) can be interpreted as requests for identification by a rule stipulated by Labov and Fanshel (1977) -- if a speaker ostensibly informs a hearer about a state-of-affairs for which it is shared knowledge that the hearer has better evidence, then the speaker is actually requesting confirmation of that state-of-affairs. In Telephone (and Teletype) modality, it is shared knowledge that the hearer has the best evidence for what she "has", how the pieces are arranged, etc. When the apprentice receives a Group C utterance, she confirms its truth perceptually (rather than by proving a theorem), and thereby identifies the referents of the NP's in the utterance. The indirect request for confirmation rule accounts for 66% of the identification request utterances (overlapping with Group A for 35%). This important rule cannot be explained in the theory. It seems to derive more from properties of evidence for belief than it does from a theory of action. As such, it can only be stipulated to a rule-based inference mechanism (Cohen, 1979), rather than be derived from more basic principles. Group D utterance forms are the closest forms to direct requests for identification that appeared, though strictly speaking, they are not direct requests. Case 1 mentions "Look on", but does not indicate a search explicitly.
The interpretation of this utterance in Perrault and Allen's scheme would require an additional "body-action" inference to yield a request for identification. Case 2 is literally an informative utterance, though a request could be derived in one step. Importantly, the frequency of these "nearest neighbors" is minimal (3%). The act of requesting referent identification is nearly always performed indirectly in Telephone mode. This being the case, inferential mechanisms are needed for uncovering the speaker's intentions from the variety of forms with which this act is performed. A plan-based theory of communication augmented with a rule for identifying indirect requests for confirmation would account for 79% of the identification requests in our corpus. A hierarchy of communicative acts (including their propositional content) can be used to organize derived rules for interpreting speaker intent based on utterance form, shared knowledge and shared expectations (Cohen, 1979). Such a rule-based system could form the basis of a future pragmatics/discourse component for a speech understanding system. These results are similar in some ways to observations by Ochs and colleagues (Ochs, 1979; Ochs, Schieffelin, and Pratt, 1979). They note that parent-child and child-child discourse is often comprised of "sequential" constructions -- with separate utterances for securing reference and for predicating. They suggest that language development should be regarded as an overlaying of newly-acquired linguistic strategies onto previous ones. Adults will often revert to developmentally early linguistic strategies when they cannot devote the appropriate time/resources to planning their utterances. Thus, Ochs et al. suggest, when competent speakers are communicating while concentrating on a task, one would expect to see separate utterances for reference and predication. This suggestion is certainly backed by our corpus, and is important for computational linguistics since, to be sure, our systems are intended to be used in some task. It is also suggested that the presence of sequential constructions is tied to the possibilities for preplanning an utterance, and hence oral and written discourse would differ in this way. Our study upholds this claim for Telephone vs. Teletype, but does not do so for our Written condition in which many requests for identification occur as separate steps. Furthermore, Ochs et al.'s claim does not account for the use of identification requests in Teletype modality after prior referential miscommunication (Cohen, 1981). Thus, it would seem that sequential constructions can result from (what they term) planned as well as unplanned discourse. It is difficult to compare our results with those of other studies. Chapanis et al.'s observation that voice modes are faster and wordier than teletype modes certainly holds here. However, their transcripts cannot easily be used to verify our findings since, for the equipment assembly problem, their subjects were given a set of instructions that could be, and often were, read to the listener. Thus, utterance function would often be predetermined.
Our subjects had to remember the task and compose the instructions afresh. Grosz's (1977) study also cannot be directly compared for the phenomena of interest here since the core dialogues that were analyzed in depth employed a "mixed" communication modality in which the expert communicated with a third party by teletype. The third party, located in the same room as the apprentice, vocally transmitted the expert's communication to the apprentice, and typed the apprentice's vocal response to the expert. The findings of finer-grained and indirect vocal requests would not appear under these conditions. Thompson's (1980) extensive tabulation of utterance forms in a multiple modality comparison overlaps our analysis at the level of syntax. Both Thompson's and the present study are primarily concerned with extending the habitability of current systems by identifying phenomena that people use but which would be problematic for machines. However, our two studies proceeded along different lines. Thompson's was more concerned with utterance forms and less with pragmatic function, whereas for this study, the concerns are reversed in priority. Our priority stems from the observation that differences in utterance function will influence the processing of the same utterance form. However, the present findings cannot be said to contradict Thompson's (nor vice versa). Each corpus could perhaps be used to verify the findings in the other. Spoken and teletype discourse, even used for the same ends, differ in structure and in form. Telephone conversation about object assembly is dominated by explicit requests to find objects satisfying descriptions. However, these requests are never performed directly. Techniques for interpreting "indirect speech acts" thus may become crucial for speech understanding systems. These findings must be interpreted with two cautionary notes. First, the request-for-identification category is specific to discourse situations in which the topics of conversation include objects physically present to the hearer. Though the same surface forms might be used, if the conversation is not about manipulating concrete objects, different pragmatic inferences could be made. Secondly, the indirection results may occur only in conversations between humans. It is possible that people do not wish to verbally instruct others with fine-grained imperatives for fear of sounding condescending. Print may remove such inhibitions, as may talking to a machine. This is a question that cannot be settled until good speech understanding systems have been developed. We conjecture that the better the system, the more likely it will be to receive fine-grained indirect requests. It appears to us preferable to err on the side of accepting people's natural forms of speech than to force the user to think about the phrasing of utterances, at the expense of concentrating on the problem.
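As a rough illustration of the kind of derived, form-based interpretation rules argued for above, the following Python sketch (the patterns, names, and coverage are ours, a simplification rather than the classification actually used in the study) maps a few of the surface forms from Table III to the request-for-identification reading when the referents are objects physically present to the hearer:

    import re

    # Hypothetical, simplified rules: each maps a surface form to the indirect
    # request-for-identification reading, on the assumption that the hearer can
    # see and handle the objects being described (the assembly setting above).
    RULES = [
        (re.compile(r"^there is (a|an|the) (?P<np>.+)$", re.I),
         "Group A: asserting a precondition of looking"),
        (re.compile(r"^do you see (a|an|the) (?P<np>.+)\?$", re.I),
         "Group A: questioning an effect of looking"),
        (re.compile(r"^(a|an|the) (?P<np>[^?]+)\?$", re.I),
         "Group B: NP fragment with rising intonation"),
        (re.compile(r"^you have (a|an|the) (?P<np>.+)$", re.I),
         "Group C: informing about a hearer-verifiable state"),
    ]

    def interpret(utterance, objects_present=True):
        for pattern, label in RULES:
            match = pattern.match(utterance.strip())
            if match and objects_present:
                return "REQUEST(IDENTIFY-REFERENT, '%s')" % match.group("np"), label
        return "no identification request recognized", None

    print(interpret("There is a rubber ring shaped like an O"))
    print(interpret("The smallest of the red pieces?"))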
Appendix:
| null | null | null | null | {
"paperhash": [
"cohen|the_need_for_referent_identification_as_a_planned_action",
"thompson|linguistic_analysis_of_natural_language_communication_with_computers",
"perrault|a_plan-based_analysis_of_indirect_speech_act",
"sidner|research_in_knowledge_representation_for_natural_language_understanding",
"cohen|elements_of_a_plan-based_theory_of_speech_acts",
"chapanis|studies_in_interactive_communication:_ii._the_effects_of_four_communication_modes_on_the_linguistic_performance_of_teams_during_cooperative_problem_solving",
"mann|observation_methods_for_human_dialogue.",
"chapanis|studies_in_interactive_communication:_i._the_effects_of_four_communication_modes_on_the_behavior_of_teams_during_cooperative_problem-solving",
"grosz|the_representation_and_use_of_focus_in_dialogue_understanding."
],
"title": [
"The Need for Referent Identification as a Planned Action",
"Linguistic Analysis of Natural Language Communication With Computers",
"A Plan-Based Analysis of Indirect Speech Act",
"Research in Knowledge Representation for Natural Language Understanding",
"Elements of a Plan-Based Theory of Speech Acts",
"Studies in Interactive Communication: II. The Effects of Four Communication Modes on the Linguistic Performance of Teams during Cooperative Problem Solving",
"Observation Methods for Human Dialogue.",
"Studies in Interactive Communication: I. The Effects of Four Communication Modes on the Behavior of Teams During Cooperative Problem-Solving",
"The representation and use of focus in dialogue understanding."
],
"abstract": [
"The paper presents evidence that speakers often attempt to get hearers to identify referents as a separate step in the speaker's plan. Many of the communicative acts performed in service of such referent identification steps can be analyzed by extending a plan-based theory of communication for task-oriented dialogues to include an action representing a hearer's identifying the referent of a description -- an action that is reasoned about in speakers' and hearers' plans. The phenomenon of addressing referent identification as a separate goal is shown to distinguish telephone from teletype task-oriented dialogues and thus has implications for the design of speech-understanding systems.",
"Interaction with computers in natural \nlanguage requires a language that is flexible \nand suited to the task. This study of natural \ndialogue was undertaken to reveal those characteristics \nwhich can make computer English more \nnatural. Experiments were made in three modes \nof communication: face-to-face, terminal-to-terminal \nand human-to-computer, involving over \n80 subjects, over 80,000 words and over 50 \nhours. They showed some striking similarities, \nespecially in sentence length and proportion of \nwords in sentences. The three modes also share \nthe use of fragments, typical of dialogue. \nDetailed statistical analysis and comparisons \nare given. The nature and relative frequency of \nfragments, which have been classified into \ntwelve categories, is shown in all modes. Special \ncharacteristics of the face-to-face mode \nare due largely to these fragments (which \ninclude phatics employed to keep the channel of \ncommunication open). Special characteristics of \nthe computational mode include other fragments, \nnamely definitions, which are absent from other \nmodes. Inclusion of fragments in computational \ngrammar is considered a major factor in improving \ncomputer naturalness. \n \nThe majority of experiments involved a real \nlife task of loading Navy cargo ships. The \npeculiarities of face-to-face mode were similar \nin this task to results of earlier experiments \ninvolving another task. It was found that in \ntask oriented situations the syntax of interactions \nis influenced in all modes by this context \nin the direction of simplification, resulting in \nshort sentences (about 7 words long). Users \nseek to maximize efficiency In solving the problem. \nWhen given a chance, in the computational \nmode, to utilize special devices facilitating \nthe solution of the problem, they all resort to \nthem. \n \nAnalyses of the special characteristics of \nthe computational mode, including the analysis \nof the subjects\" errors, provide guidance for \nthe improvement of the habitability of such systems. \nThe availability of the REL System, a \nhigh performance natural language system, made \nthe experiments possible and meaningful. The \nindicated improvements in habitability are now \nbeing embodied in the POL (Problem Oriented \nLanguage) System, a successor to REL.",
"We propose an account of indirect forms of speech acts to request and inform based on the hypothesis that language users can recognize actions being performed by others, infer goals being sought, and cooperate in their achievement. This cooperative behaviour is independently motivated and may or may not be intended by speakers. If the hearer believes it is intended, he or she can recognize the speech act as indirect; otherwise it is interpreted directly. Heuristics are suggested to decide among the interpretations.",
"Abstract : This report summarizes the research of BBN's ARPA-sponsored Knowledge Representation for Natural Language Understanding project during its fourth year. In it we report on advances, both in theory and implementation, in the areas of knowledge representation, natural language understanding, and abstract parallel machines. In particular, we report on theoretical advances in the knowledge representation system KL-ONE, extensions to the KL-ONE system, and new uses of KL-ONE in the domain of knowledge about graphic displays. We report on a design for a new prototype natural language understanding system, on issues in cascaded architectures for interaction among the components of a language system, and on a module for Lexical acquisition. In addition, we examine three topics in discourse: a new model of speaker meaning, which extends our previous work on speakers' intentions, an investigation of reference planning and identification, and a theory of 'one'-anaphora interpretation. Our discussion of abstract parallel machines reports on a class of algorithms that approximate Quillian's (49) ideas on the function of human memory. (Author)",
"This paper explores the truism that people think about what they say. It proposes hat, to satisfy their own goals, people often plan their speech acts to affect their listeners' beliefs, goals, and emotional states. Such language use can be modelled by viewing speech acts as operators in a planning system, thus allowing both physical and speech acts to be integrated into plans. \n \nMethodological issues of how speech acts should be defined in a plan-based theory are illustrated by defining operators for requesting and informing. Plans containing those operators are presented and comparisons are drawn with Searle's formulation. The operators are shown to be inadequate since they cannot be composed to form questions (requests to inform) and multiparty requests (requests to request). By refining the operator definitions and by identifying some of the side effects of requesting, compositional adequacy is achieved. The solution leads to a metatheoretical principle for modelling speech acts as planning operators.",
"Two-man teams solved credible, “real world” problems for which computer assistance has been or could be useful. Conversations were carried on in one of four modes of communication: (1) typewriting, (2) handwriting, (3) voice, and (4) natural, unrestricted communication. Both experienced and inexperienced typists were tested in the typewriting mode. Performance was assessed on three classes of dependent measures: time to solution, behavioral measures of activity, and linguistic measures. Significant differences among the communication modes were found in each of the three classes of dependent variable. This paper is concerned mainly with the results of the linguistic analyses. Linguistic performance was assessed with 182 measures, most of which turned out to be redundant and some of which were useless or meaningless. Those that remain show that although problems can be solved faster in the oral modes than in the hard-copy modes, the oral modes are characterized by many more messages, sentences, words, and unique words; much higher communication rates; but lower type-token ratios. Although a number of significant problem and job role effects were found, there were relatively few significant interactions of modes with these variables. It appears, therefore, that the mode effects hold for both problems and for both job roles assigned to the subjects.",
"Abstract : This report describes progress on a new approach for improving man- machine communication. The goal of the work is to significantly expand and diversify the capabilities of the computer interfaces that people use. The approach is first to design computer processes that can assimilate particular aspects of dialogue between people, then to transfer these processes into man-machine communication. The approach requires that particular aspects of the human ability to communicate be selected and studied in detail. This report describes new methods of data collection developed to meet this need and tells how they will be used. The report focuses on nine phenomena of human dialogue which have been selected from approximately 23 phenomena proposed and explored. For most of the nine, explicit observational instructions are given as well.",
"Two-man teams solved credible, “real-world” problems for which computer assistance has been or could be useful. Conversations were carried on in one of four modes of communication: (1) typewriting, (2) handwriting, (3) voice, and (4) natural, unrestricted communication. Two groups of subjects (experienced and inexperienced typists) were tested in the typewriting mode. Performance was assessed on three classes of dependent measures: time to solution, behavioral measures of activity, and linguistic measures. Significant and meaningful differences among the communication modes were found in each of the three classes of dependent variable. This paper is concerned mainly with the results of the activity analyses. Behavior was recorded in 15 different categories. The analyses of variance yielded 34 statistically significant terms of which 27 were judged to be practically significant as well. When the data were transformed to eliminate heterogeneity, the analyses of variance yielded 35 statistically significant terms of which 26 were judged to be practically significant.",
"Abstract : This report develops a representation of focus of attention thatcircumscribes discourse contexts within a general representation ofknowledge. Focus of attention is essential to any comprehension processbecause what and how a person understands is strongly influenced bywhere his attention is directed at a given moment. To formalize thenotion of focus, the need for and the use of focus mechanisms areconsidered from the standpoint of building a computer system that canparticipate in a natural language dialogue with a ser, Two ranges offocus, global and immediate, are investigated, and representations forincorporating them in a computer system are developed.The global focus in which an utterance is interpreted is determinedby the total discourse and situational setting of the utterance. Itinfluences what is talked about, how different concepts are introduced,and how concepts are referenced. To encode global focuscomputationally, a representation is developed that highlights thoseitems that are relevant at a given place in a dialogue. The underlyingknowledge representation is segmented into subunits, called focusspaces, that contain those items that are in the focus of attention of adialogue participant during a particular part of the dialogue.Mechanisms are required for updating the focus representation,because, as a dialogue progresses, the objects and actions that arerelevant to the conversation, and therefore in the participants' focusof attention, change. Procedures are described for deciding when andhow to shift focus in task-oriented dialogues, i.e., in dialogues inwhich the participants are cooperating in a shared task. Theseprocedures are guided by a representation of the task being performed.The ability to represent focus of attention in a languageunderstanding system results in a new approach to an important problemin discourse comprehension -- the identification of the referents ofdefinite noun phrases."
],
"authors": [
{
"name": [
"Philip R. Cohen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. H. Thompson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Raymond Perrault",
"James F. Allen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Sidner",
"M. Bates",
"R. Bobrow",
"R. Brachman",
"Philip R. Cohen",
"David J. Israel",
"B. Webber",
"W. Woods"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Philip R. Cohen",
"C. Raymond Perrault"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Chapanis",
"R. Parrish",
"Robert B. Ochsman",
"G. D. Weeks"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Mann",
"James A. Moore",
"James A Lewin",
"James H. Carlisle"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Chapanis",
"Robert B. Ochsman",
"R. Parrish",
"G. D. Weeks"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Grosz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"13321126",
"1010309",
"3069430",
"59852936",
"2166355",
"60494569",
"60761300",
"142826325",
"61114426"
],
"intents": [
[
"methodology",
"background"
],
[],
[
"background"
],
[],
[
"methodology",
"background"
],
[],
[
"methodology"
],
[],
[]
],
"isInfluential": [
true,
false,
false,
false,
true,
false,
false,
false,
false
]
} | Problem: The paper aims to investigate the differences in language structure and form between spoken discourse and typewritten dialogue, specifically focusing on the dominant use of explicit requests for object identification in spoken task-related discourse.
Solution: The hypothesis suggests that techniques for interpreting indirect speech acts, particularly requests for object identification, will be crucial for developing speech understanding systems that can accurately process and respond to the finer-grained and indirect nature of spoken language in task-related dialogues. | 512 | 0.015625 | null | null | null | null | null | null | null | null |
486875ce588d9d28b1f097817ebd5bdad7178acf | 15330234 | null | A Knowledge Engineering Approach to Natural Language Understanding | This paper describes the results of a preliminary study of a Knowledge Engineering approach to Natural Language Understanding. A computer system is being developed to handle the acquisition, representation, and use of linguistic knowledge. The computer system is rule-based and utilizes a semantic network for knowledge storage and representation. In order to facilitate the interaction between user and system, input of linguistic knowledge and computer responses are in natural language. Knowledge of various types can be entered and utilized: syntactic and semantic; assertions and rules. The inference tracing facility is also being developed as a part of the rule-based system with output in natural language. A detailed example is presented to illustrate the current capabilities and features of the system. ** 'Y IS A VARIABLE. ** 'ON IS A RELATION. ** 'A IS A G-DETERMINER. ** 'BOTTLE IS A NOUN. ** 'CONTAINER IS A NOUN. ** 'TABLE IS A NOUN. ** 'DESK IS A NOUN. ** 'BAR IS A NOUN. ** 'FLUID IS A MASS-NOUN. ** 'MATERIAL IS A MASS-NOUN. ** 'MILK IS A MASS-NOUN. ** 'WATER IS A MASS-NOUN. | {
"name": [
"Shapiro, Stuart C. and",
"Neal, Jeannette G."
],
"affiliation": [
null,
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 17 | 15 | null | This paper describes the results of a • preliminary study of a Knowledge Engineering (KE) approach to Natural Language Understanding (NLU). The KE approach to an Artificial Intelligence task involves a close association with an expert in the task domain. This requires making it easy for the expert to add new knowledge to the computer system, to understand what knowledge is in the system, and to understand how the system is accomplishing the task so that needed changes and corrections are easy to recognize and to make. It should be noted that our task domain is NLU. That is, the knowledge in the system is knowledge about NLU and the intended expert is an expert in NLU.The KE system we are using is the SNePS semantic network processing system [ii] . This system inci~ ~es a semantic network system in which ** This work was supported in part by the National Science Foundation under Grants MCS80-06314 and SPI-8019895. all knowledge, including rules, is represented as nodes in a semantic network, an inference system that performs reasoning according to the rules stored in the network, and a tracing package that allows the user to follow the system's reasoning. A major portion of this study involves the design and implementation of a SNePS-based system, called the NL-system, to enable the NLU expert to enter linguistic knowledge into the network in natural language, to have this knowledge available to query and reason about, and to use this knowledge for processing text including additional NLU knowledge. These features distinguish our system from other rule-based natural language processing systems such as that of Pereira and Warren [9] and Robinson [i0] .One of the major concerns of our study is the acquisition of knowledge, both factual assertions and rules of inference.Since both types of knowledge are stored in similar form in the semantic network, our NL-system is being developed with the ability to handle the input of both types of knowledge, with this new knowledge immediately available for use.Our concern with the acquisition of both types of knowledge differ~ from the approach of Haas and Hendrix [i] , who a~e pursuing only the acquisition of large aggregations of individual facts.The benefit of our KE approach may be seen by considering the work of Lehnert [5] . She compiled an extensive list of rules concerning how questions should he answered.For example, when asked, "Do you know what time it is?", one should instead answer the question "What time is it?". Lehnert only implemented and tested some of her rules, and those required a programming effort. If a system like the one being proposed here had been available to her, Lehnert could have tested all her rules with relative ease.Our ultimate goal is a KE system with all its linguistic knowledge as available to the language expert as domain knowledge is in other expert systems. In this preliminary study we explore the feasibility of our approach as implemented in our representations and N-L-system.A major goal of this study is the design and implementation of a user-friendly system for experimentation in KE applied to Natural Language Understanding.The NL-system consists of two logical components: a) A facility for the input of linguistic knowledge into the semantic network in natural language., This linguistic knowledge primarily consists of rules about NLU and a lexicon. 
The NL-system contains a core of network rules which parse a user's natural language rule and build the corresponding structure in the form of a network rule. This NL-system facility enables the user to manipulate both the syntactic and semantic aspects of surface strings. b) A facility for phrase/sentence generation and question answering via rules in the network. The user can pose a limited number of types of queries to the system in natural language, and the system utilizes rules to parse the query and generate a reply. An inference tracing facility is also being developed which uses this phrase/sentence generation capability. This will enable the user to trace the inference processes, which result from the activation of his rules, in natural language. When a person uses this NL-system for experimentation, there are two task domains coresident in the semantic network. These domains are: (1) the NLU-domain, which consists of the collection of propositions and rules concerning Natural Language Understanding, including both the NL-system core rules and assertions and the user-specified rules and assertions; and (2) the domain of knowledge which the user enters and interacts with via the NLU domain. For this study, a limited "Bottle Domain" is used as the domain of type (2). This domain was chosen to let us experiment with the use of semantic knowledge to clarify, during parsing, the way one noun modifies another in a noun-noun construction, viz. "milk bottle" vs. "glass bottle". In a sense, the task domain (2) is a subdomain of the NLU-domain since task domain (2) is built and used via the NLU-domain. However, the two domains interact when, for example, knowledge from both domains is used in understanding a sentence being "read" by the system. The system is dynamic and new knowledge, relevant to either or both domains, can be added at any time. The basic tools that the language expert will need to enter into the system are a lexicon of words and a set of processing rules. This system enables them to be input in natural language. The system initially uses five "undefined terms": L-CAT, S-CAT, L-REL, S-REL, and VARIABLE. L-CAT is a term which represents the category of all lexical categories such as VERB and NOUN. S-CAT represents the category of all string categories such as NOUN PHRASE or VERB PHRASE. L-REL is a term which represents the category of relations between a string and its lexical constituents. Examples of L-RELs might be MOD NOUN and HEAD NOUN (of a NOUN NOUN PHRASE). S-REL represents the category of relations between a string and its sub-string constituents, such as FIRST NP and SECOND NP (to distinguish between two NPs within one sentence). VARIABLE is a term which represents the class of identifiers which the user will use as variables in his natural language rules. Before entering his rules into the system, the user must inform the system of all members of the L-CAT and VARIABLE categories which he will use. Words in the S-CAT, L-REL and S-REL categories are introduced by the context of their use in user-specified rules. The choice of all linguistic names is totally at the discretion of the user. A list of the initial entries for the example of this paper is given below.
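** 'Y IS A VARIABLE.
** 'ON IS A RELATION.
** 'A IS A G-DETERMINER.
** 'BOTTLE IS A NOUN.
** 'CONTAINER IS A NOUN.
** 'TABLE IS A NOUN.
** 'DESK IS A NOUN.
** 'BAR IS A NOUN.
** 'FLUID IS A MASS-NOUN.
** 'MATERIAL IS A MASS-NOUN.
** 'MILK IS A MASS-NOUN.
** 'WATER IS A MASS-NOUN.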
The single quote mark indicates that the following word is mentioned rather than used. Throughout this paper, lines beginning with the symbol ** are entered by the user and the following line(s) are the computer response. In response to a declarative input statement, if the system has been able to parse the statement and build a structure in the semantic network to represent the input statement, then the computer replies with an echo of the user's statement prefaced by the phrase "I UNDERSTAND THAT". In other words, the building of a network structure is the system's "representation" of understanding. The core of the NL-system contains a collection of rules which accepts the language defined by the grammar listed in the Appendix. The core is responsible for parsing the user's natural language input statements and building the corresponding network structure. It is also necessary to start with a set of semantic network structures representing the basic relations the system can use for knowledge representation. Currently these relations are: a) Word W is preceded by "connector point" P in a surface string; e.g. node M3 of figure 1 represents that word IS is preceded by connector point M2 in the string; b) Lexeme L is a member of category C; e.g. this is used to represent the concept that 'BOTTLE IS A NOUN, which was input in Section 3; c) The string beginning at point P1 and ending at point P2 in a surface string is in category C; e.g. node M53 of figure 3 represents the concept that "a bottle" is a GNP; d) Item X has the relation R to item Y; e.g. node M75 of figure 1 represents the concept that the class of bottles is a subset of the class of containers; e) A class is characterized by its members participating in some relation; e.g. the class of glass bottles is characterized by its members being made of glass; f) The rule structures of SNePS. The representation of a surface string utilized in this study consists of a network version of the list structure used by Pereira and Warren [10] which eliminates the explicit "connecting" tags or markers of their alternate representation. This representation is also similar to Kay's charts [4] in that several structures may be built as alternative analyses of a single substring. The network structure built up by our top-level "reading" function, without any of the additional structure that would be added as a result of processing via rules of the network, is illustrated in figure 1. As each word of an input string is read by the system, the network representation of the string is extended and relevant rules stored in the SNePS network are triggered. All applicable rules are started in parallel by processes of our MULTI package [8], are suspended if not all their antecedents are satisfied, and are resumed if more antecedents are satisfied as the string proceeds. The SNePS bidirectional inference capability [6] focuses attention towards the active parsing processes and cuts down the fan out of pure forward or backward chaining. The system has many of the attributes and benefits of Kaplan's producer-consumer model [3] which influenced the design of the inference system. The two SNePS subsystems, the MULTI inference system and the MATCH subsystem, provide the user with the pattern matching and parse suspension and continuation capability enjoyed by the Flexible Parser of Hayes & Mouradian [2]. After having entered a lexicon into the system as described above, the user will enter his natural language rules. These rules must be in the IF-THEN conditional form.
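As a rough illustration of the linear string representation just described, a short Python sketch follows; the names and data layout here are ours, not the paper's actual SNePS encoding, and the node labels do not correspond to the M-numbers in the figures:

    # Each word node is linked to the "connector point" that precedes it, so that
    # alternative analyses can later be attached to (begin, end) pairs of points.
    def read_string(words):
        points = ["P%d" % i for i in range(len(words) + 1)]
        network = []                      # (node, relation, value) triples
        for i, word in enumerate(words):
            node = "W%d" % (i + 1)        # arbitrary node names for this sketch
            network.append((node, "word", word))
            network.append((node, "preceded-by", points[i]))
        return points, network

    points, net = read_string("A bottle is a container".split())
    for triple in net:
        print(triple)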
A sample rule that the user might enter is:
The words which are underlined in the above rule are terms selected by the user for certain linguistic entities. The lexical category names such as G-DETERMINER and NOUN must be entered previously as discussed above. The words MOD-NOUN and HEAD-NOUN specify lexical constituents of a string and therefore the system adds them to the L-REL category. The string name NNP is added to the S-CAT category by the system. The user's rule-statement is read by the system and processed by existing rules as described above. When it has been completely analyzed, a translation of the rule-statement is asserted in the form of a network rule structure. This rule is then available to analyze further user inputs. The form of these user rules is determined by the design of our initial core of rules. We could, of course, have written rules which accept user rules of the form NNP ---> G-DETERMINER NOUN NOUN. Notice, however, that most of the user rules of this section contain more information than such simple phrase-structure rules. Figure 2 contains the list of the user natural language rules which are used as input to the NL-system in the example developed for this paper. These rules illustrate the types of rules which the system can handle.
11. ** IF THE CHARACTERISTIC OF E IS TO BE MADE OF THE ITEM X * AND Y IS A MEMBER OF E * THEN THE CHARACTERISTIC OF Y IS TO BE MADE OF THE ITEM X.
Figure 2. The rules used as input to the system.
By adding the rules of figure 2 to the system, we have enhanced the ability of the NL-system to "understand" surface strings when "read" into the network. If we examine rules 1 and 2, for example, we find they define a GNP (a generic noun phrase). Rules 4, 8, and 9 stipulate that a relationship exists between a surface string and the concept or proposition which is its intension. This relationship we denoted by "expresses". When these rules are triggered, they will not only build syntactic information into the network categorizing the particular string that is being "read" in, but will also build a semantic node representing the relationship "expresses" between the string and the node representing its intension. Thus, both semantic and syntactic concepts are built and linked in the network. In contrast to rules 1-9, rules 10 and 11 are purely semantic, not syntactic. The user's rules may deal with syntax alone, semantics alone, or a combination of both. All knowledge possessed by the system resides in the same semantic network and, therefore, both the rules of the NL-system core and the user's rules can be triggered if their antecedents are satisfied. Thus the user's rules can be used not only for the input of surface strings concerning the task domain (2) discussed in Section 2, but also for enhancing the NL-system's capability of "understanding" input information relative to the NLU domain. Assuming that we have entered the lexicon via the statements shown in Section 3 and have entered the rules listed in Section 6, we can input a sentence such as "A bottle is a container". Figure 3 illustrates the network representation of the surface string "A bottle is a container" after having been processed by the user's rules listed in Section 6. Rule 2 would be triggered and would identify "a bottle" and "a container" as GNPs, building nodes M53, M55, M61, and M63 of figure 3.
Then the antecedent of rule 7 would be satisfied by the sentence, since it consists of a GNP, namely "a bottle", followed by the word "is", followed by a GNP, namely "a container". Therefore the node M90 of figure 3 would be built identifying the sentence as a DGNP-SNTC. The addition of this knowledge would trigger rule 8 and node M75 of figure 3 would be built asserting that the class named "bottle" is a subset of the class named "container". Furthermore, node M91 would be built asserting that the sentence EXPRESSES the above stated subset proposition. Let us now input additional statements to the system. As each sentence is added, node structures are built in the network concerning both the syntactic properties of the sentence and the underlying semantics of the sentence. Each of these structures is built into the system only, however, if it is the consequence of the triggering of one of the expert's rules. We now add three sentences (preceded by the **) and the program response is shown for each. Each of the above input sentences is parsed by the rules of Section 6 identifying the various noun phrases and sentence structures, and a particular semantic subset relationship is built corresponding to each sentence. We can now query the system concerning the information just added and the core rules will process the query. The query is parsed, an answer is deduced from the information now stored in the semantic network, and a reply is generated from the network structure which represents the assertion of the subset relationship built corresponding to each of the above input statements. The next section discusses the question-answering/generation facility in more detail. Now we input the sentence "A milk bottle is on a table". The rules involved are rules 2, 3, 4, 6, 9, and 10. The phrase "a milk bottle" triggers rule 3 which identifies it as an NNP (noun-noun phrase). Then since the string has been identified as an NNP, rule 4 is triggered and a new class is created and the new class is a subset of the class representing bottles. Rule 6 is also triggered by the addition of the instances of the consequents of rules 3 and 4 and by our previous input sentences asserting that "A bottle is a container" and "Milk is a fluid". As a result, additional knowledge is built into the network concerning the new sub-class of bottles: the function of this new class is to contain milk. Then since "a table" satisfies the conditions for rule 2, it is identified as a GNP, rule 9 is finally triggered, and a structure is built into the network representing the concept that a member of the set of bottles for containing milk is on a member of the set of tables. The antecedents of rule 10 are satisfied by this member of the set of bottles for containing milk, and an assertion is added to the effect that the function of this member is also to contain milk. The computer responds "I UNDERSTAND THAT . . ." only when a structure has been built which the sentence EXPRESSES. In order to further ascertain whether the system has understood the input sentence, we can query the system as follows. The system's core rules again parse the query, deduce the answer, and generate a phrase to express the answer. We now input the sentence "A glass bottle is on a desk" to be parsed and processed by the rules of Section 6. Processing of this sentence is similar to that of the previous sentence, except that rule 5 will be triggered instead of rule 6 since the system has been informed that glass is a material.
Since the string "a glass bottle"is a noun-noun phrase, glass is a subset of material, and bottle is a subset of container, a new class is created which is a subset of bottles and the characteristic of this class is to be made of glass. The remainder of the sentence is processed in the same way as the previous input sentence, until finally a structure is built to represent the proposition that a member of the set of bottles made of glass is on a member of the set of desks. Again, this proposition is linked to the input sentence by an EXPRESSES relation.When we input the sentence (again preceded by the **) to the system, it responds with its conclusion as shown here.To make sure that the system understands the difference between "glass bottle" and "milk bottle", we query the system relative to the item on the desk:We now try "A water bottle is on a bar", but the system cannot fully understand this sentence since it has no knowledge about water.We have not t01d the system whether water is a fluid or a material.Therefore, rules 3 and 4 are triggered and a node is built to represent this new class of bottles, but no assertion is built concerning the properties of these bottles.Since only three of the four antecedents of rule 6 are satisfied, processing of this rule is suspended.Rule 9 is triggered, however, since all of its antecedents are satisfied, and therefore an assertion is built into the network representing the proposition that a member of a subset of bottles is on a member of the class of bars.Thus the system replies that it has understood the input sentence, but really has not fully understood the phrase "a water bottle" as we can see when we query the system. It does not respond that it is "a bottle for containing water".Essentially, the phrase "water bottle" is ambiguous for the system.It might mean '%ottle for containing water", 'bottle made of water", or something else. The system's '~epresentation" of this ambiguity is the suspended rule processing. Meanwhile the parts of the sentence which are "comprehensible" to the system have been processed and stored. After we tell the system '~ater is a fluid", the system resumes its processing of rule 6 and an assertion is established in the network representing the concept that the function of this latest class of bottles is to contain water. The ambiguity is resolved by rule processing being completed in one of the ways which were previously possible.We can then query the system to show its understanding of what type of bottle is on the bar.This example demonstrates two features of the system:I) The combined use of syntactic and semantic information in the processing of surface strings.This feature is one of the primary benefits of having not only syntactic and semantic, but also hybrid rules.2) The use of bi-directional inference to use later information to process or disambiguate earlier strings, even across sentence boundaries.The question-answering/generation facility of the NL-system, mentioned briefly in Section 2, is completely rule-based. When a query such as 'What is a bottle?" is entered into the system, the sentence is parsed by rules of the core in conjunction with user-defined rules. That is, rule 2 of Section 6 would identify "a bottle" as a GNP, but the top level parse of the input string is accomplished by a core rule. The syntax and corresponding semantics designated by rules 7 and 8 of Section 6 form the basis of the core rule. 
Our current system does not enable the user to specify the syntax and semantics of questions, so the core rules which define the syntax and consequents of a question were coded specifically for the example of this paper; we intend to pursue this issue in the future. Currently, the two types of questions that our system can process are: WHAT IS <NP>? and WHAT IS <RELATION> <NP>? Upon successful parse of the query, the system engages in a deduction process to determine which set is a superset of the set of bottles. This process can either find an assertion in the network answering the query or, if necessary, the process can utilize bi-directional inference, initiated in backward-chaining mode, to deduce an answer. In this instance, the network structure dominated by node M75 of figure 3 is found as the answer to the query. This structure asserts that the set of bottles is a subset of the set of containers. Another deduction process is now initiated to generate a surface string to express this structure. For the purpose of generation, we have deliberately not used the input strings which caused the semantic network structures to be built. If we had deduced a string which EXPRESSES node M75, the system would simply have found and repeated the sentence represented by node M90 of figure 3. We plan to make use of these surface strings in future work, but for this study, we have employed a second "expresses" relation, which we call EXPRESS-2, and rules of the core to generate surface strings to express semantic structures.
Figure 4. Network representation of a generated surface string.
Figure 4 illustrates the network representation of the surface string generated for node M75. The string "A bottle", dominated by node M221, is generated for node M54 of figure 3, expressing an arbitrary member of the set of bottles. The string "a container", dominated by node M223, is generated to express the set of containers, represented by node M62 of figure 3. Finally, the surface string "A bottle is a container", represented by node M226, is established to express node M75 and the answer to the query. In general, a surface sentence is generated to EXPRESS-2 a given semantic structure by first generating strings to EXPRESS-2 the substructures of the semantic structure and by assembling these strings into a network version of a list. Thus the semantic structure is processed in a bottom-up fashion. The structure of the generated string is a phrase-structured representation utilizing FIRST and REST pointers to the sub-phrases of a string. This representation reflects the subordinate relation of a phrase to its "parent" phrase. The structures pointed to by the FIRST and REST arcs can be a) another list structure with FIRST and REST pointers; b) a string represented by a node such as M90 of figure 3 with BEG, END, and CAT arcs; or c) a node with WORD arc to a word and an optional PRED arc to another node with PRED and WORD arcs. After the structure representing the surface string has been generated, the resulting list or tree is traversed and the leaf nodes printed as response. Our goal is to design an NLU system for a linguistic theorist to use for language processing. The system's linguistic knowledge should be available to the theorist as domain knowledge. As a result of our preliminary study of a KE approach to Natural Language Understanding, we have gained valuable experience with the basic tools and concepts of such a system.
All aspects of our NL-system have, of course, undergone many revisions and refinements during development and will most likely continue to do so.During the course of our study, we have a) developed two representations of a surface string: I) a linear representation appropriate for input strings as shown in figure i; and 2) a phrase-structured representation appropriate for generation, shown in figure 4; b) designed a set of SNePS rules which are capable of analyzing the user's natural language input rules and building the corresponding network rules; c) identified basic concepts essential for linguistic analysis: lexical category, phrase category, relation between a string and lexical constituent, relation between a string and substrimg, the expresses relations between syntactic structures and a semantic structures, and the concept of a variable that the user may wish to use in input rules; d) designed a set of SNePS rules which can analyze some simple queries and generate a response.As our system has evolved, we have striven to reduce the amount of core knowledge which is essential for the system to function.We want to enable the user to define the language processing capabilities of the system~ but a basic core of rules is essential to process the user's initial lexicon entries and rules.One of our high priority items for the immediate future is to pursue this issue. Our objective is to develop the NL-system into a boot-strap system to the greatest degree possible. That is, with a minimal core of pre-programmed knowledge, the user will input rules and assertions to enhance the system's capability to acquire both linguistic and nonlinguistic knowledge. In other words, the user will define his own input language for entering knowledge into the system and conversing with the system.Another topic of future investigation will be the feasibility of extending the user's control over the system's basic tools by enabling the user to define the network Case frames for syntactic and semantic knowledge representation.We also intend to extend the capability of the system so as to enable the user to define the syntax of questions and the nature of response.This study explores the realm of a Knowledge Engineering approach to Natural Language Understanding. A basic core of NL rules enable the NLU expert to input his natural language rules and his lexicon into the semantic network knowledge base in natural lan~uame.In this system, the rules and assertions concerning both semantic and syntactic knowledge are stored in the network and undergo interaction during the deduction processes.An example was presented to illustrate: entry of the user's lexicon into the system; entry of the user's natural language rule statements into the system; the types of rule statements which the user can utilize; how rules build conceptual structures from surface strings; the use of knowledge for disambiguating surface structure; the use of later information for disamhiguating an earlier, partially understood sentence; the question-answering~generation facility of the NL-system. | null | null | null | null | Main paper:
i introduction:
This paper describes the results of a preliminary study of a Knowledge Engineering (KE) approach to Natural Language Understanding (NLU). The KE approach to an Artificial Intelligence task involves a close association with an expert in the task domain. This requires making it easy for the expert to add new knowledge to the computer system, to understand what knowledge is in the system, and to understand how the system is accomplishing the task so that needed changes and corrections are easy to recognize and to make. It should be noted that our task domain is NLU. That is, the knowledge in the system is knowledge about NLU and the intended expert is an expert in NLU. The KE system we are using is the SNePS semantic network processing system [11]. (This work was supported in part by the National Science Foundation under Grants MCS80-06314 and SPI-8019895.) This system includes a semantic network system in which all knowledge, including rules, is represented as nodes in a semantic network, an inference system that performs reasoning according to the rules stored in the network, and a tracing package that allows the user to follow the system's reasoning. A major portion of this study involves the design and implementation of a SNePS-based system, called the NL-system, to enable the NLU expert to enter linguistic knowledge into the network in natural language, to have this knowledge available to query and reason about, and to use this knowledge for processing text including additional NLU knowledge. These features distinguish our system from other rule-based natural language processing systems such as that of Pereira and Warren [9] and Robinson [10]. One of the major concerns of our study is the acquisition of knowledge, both factual assertions and rules of inference. Since both types of knowledge are stored in similar form in the semantic network, our NL-system is being developed with the ability to handle the input of both types of knowledge, with this new knowledge immediately available for use. Our concern with the acquisition of both types of knowledge differs from the approach of Haas and Hendrix [1], who are pursuing only the acquisition of large aggregations of individual facts. The benefit of our KE approach may be seen by considering the work of Lehnert [5]. She compiled an extensive list of rules concerning how questions should be answered. For example, when asked, "Do you know what time it is?", one should instead answer the question "What time is it?". Lehnert only implemented and tested some of her rules, and those required a programming effort. If a system like the one being proposed here had been available to her, Lehnert could have tested all her rules with relative ease. Our ultimate goal is a KE system with all its linguistic knowledge as available to the language expert as domain knowledge is in other expert systems. In this preliminary study we explore the feasibility of our approach as implemented in our representations and NL-system. A major goal of this study is the design and implementation of a user-friendly system for experimentation in KE applied to Natural Language Understanding. The NL-system consists of two logical components: a) A facility for the input of linguistic knowledge into the semantic network in natural language. This linguistic knowledge primarily consists of rules about NLU and a lexicon. The NL-system contains a core of network rules which parse a user's natural language rule and build the corresponding structure in the form of a network rule.
This NL-system facility enables the user to manipulate both the syntactic and semantic aspects of surface strings. b) A facility for phrase/sentence generation and question answering via rules in the network. The user can pose a limited number of types of queries to the system in natural language, and the system utilizes rules to parse the query and generate a reply. An inference tracing facility is also being developed which uses this phrase/sentence generation capability. This will enable the user to trace the inference processes, which result from the activation of his rules, in natural language.

When a person uses this NL-system for experimentation, there are two task domains coresident in the semantic network. These domains are: (1) the NLU-domain, which consists of the collection of propositions and rules concerning Natural Language Understanding, including both the NL-system core rules and assertions and the user-specified rules and assertions; and (2) the domain of knowledge which the user enters and interacts with via the NLU domain. For this study, a limited "Bottle Domain" is used as the domain of type (2). This domain was chosen to let us experiment with the use of semantic knowledge to clarify, during parsing, the way one noun modifies another in a noun-noun construction, viz. "milk bottle" vs. "glass bottle". In a sense, the task domain (2) is a subdomain of the NLU-domain since task domain (2) is built and used via the NLU-domain. However, the two domains interact when, for example, knowledge from both domains is used in understanding a sentence being "read" by the system. The system is dynamic and new knowledge, relevant to either or both domains, can be added at any time.

The basic tools that the language expert will need to enter into the system are a lexicon of words and a set of processing rules. This system enables them to be input in natural language. The system initially uses five "undefined terms": L-CAT, S-CAT, L-REL, S-REL, and VARIABLE. L-CAT is a term which represents the category of all lexical categories such as VERB and NOUN. S-CAT represents the category of all string categories such as NOUN PHRASE or VERB PHRASE. L-REL is a term which represents the category of relations between a string and its lexical constituents. Examples of L-RELs might be MOD NOUN and HEAD NOUN (of a NOUN NOUN PHRASE). S-REL represents the category of relations between a string and its sub-string constituents, such as FIRST NP and SECOND NP (to distinguish between two NPs within one sentence). VARIABLE is a term which represents the class of identifiers which the user will use as variables in his natural language rules. Before entering his rules into the system, the user must inform the system of all members of the L-CAT and VARIABLE categories which he will use. Words in the S-CAT, L-REL and S-REL categories are introduced by the context of their use in user-specified rules. The choice of all linguistic names is totally at the discretion of the user. A list of the initial entries for the example of this paper is given below.
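As a rough illustration of how such category declarations might be stored and checked programmatically, the following minimal Python sketch records L-CAT members and lexicon entries of the kind described above. The category names and sample declarations such as 'BOTTLE IS A NOUN follow the paper; the Python interface itself is an illustrative assumption, not part of SNePS.

```python
# Minimal sketch of a lexicon/category store for the "undefined terms"
# L-CAT, S-CAT, L-REL, S-REL, and VARIABLE described above.
from collections import defaultdict

class Lexicon:
    def __init__(self):
        # category name -> set of member words
        self.members = defaultdict(set)
        for cat in ("L-CAT", "S-CAT", "L-REL", "S-REL", "VARIABLE"):
            self.members[cat]  # create the five seed categories, initially empty

    def declare(self, statement: str) -> str:
        """Accept declarations of the form "'BOTTLE IS A NOUN."
        The quoted word is mentioned (not used); the final word names a
        lexical category, which is itself recorded as an L-CAT member."""
        words = statement.rstrip(".").split()
        word = words[0].lstrip("'")          # the mentioned lexeme
        category = words[-1]                 # e.g. NOUN, G-DETERMINER
        self.members["L-CAT"].add(category)
        self.members[category].add(word)
        return f"I UNDERSTAND THAT {word} IS A {category}"

    def is_member(self, word: str, category: str) -> bool:
        return word in self.members[category]

if __name__ == "__main__":
    lex = Lexicon()
    # Hypothetical initial entries, in the spirit of the paper's example.
    for decl in ("'BOTTLE IS A NOUN.", "'MILK IS A NOUN.",
                 "'GLASS IS A NOUN.", "'A IS A G-DETERMINER."):
        print(lex.declare(decl))
    print(lex.is_member("BOTTLE", "NOUN"))   # True
```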
The single quote mark indicates that the following word is mentioned rather than used. Throughout this paper, lines beginning with the symbol ** are entered by the user and the following line(s) are the computer response. In response to a declarative input statement, if the system has been able to parse the statement and build a structure in the semantic network to represent the input statement, then the computer replies with an echo of the user's statement prefaced by the phrase "I UNDERSTAND THAT". In other words, the building of a network structure is the system's "representation" of understanding. The core of the NL-system contains a collection of rules which accepts the language defined by the grammar listed in the Appendix. The core is responsible for parsing the user's natural language input statements and building the corresponding network structure.

It is also necessary to start with a set of semantic network structures representing the basic relations the system can use for knowledge representation. Currently these relations are: a) Word W is preceded by "connector point" P in a surface string; e.g. node M3 of figure 1 represents that the word IS is preceded by connector point M2 in the string; b) Lexeme L is a member of category C; e.g. this is used to represent the concept that 'BOTTLE IS A NOUN, which was input in Section 3; c) The string beginning at point P1 and ending at point P2 in a surface string is in category C; e.g. node M53 of figure 3 represents the concept that "a bottle" is a GNP; d) Item X has the relation R to item Y; e.g. node M75 of figure 1 represents the concept that the class of bottles is a subset of the class of containers; e) A class is characterized by its members participating in some relation; e.g. the class of glass bottles is characterized by its members being made of glass; f) The rule structures of SNePS.

The representation of a surface string utilized in this study consists of a network version of the list structure used by Pereira and Warren [10] which eliminates the explicit "connecting" tags or markers of their alternate representation. This representation is also similar to Kay's charts [4] in that several structures may be built as alternative analyses of a single substring. The network structure built up by our top-level "reading" function, without any of the additional structure that would be added as a result of processing via rules of the network, is illustrated in figure 1. As each word of an input string is read by the system, the network representation of the string is extended and relevant rules stored in the SNePS network are triggered. All applicable rules are started in parallel by processes of our MULTI package [8], are suspended if not all their antecedents are satisfied, and are resumed if more antecedents are satisfied as the string proceeds. The SNePS bidirectional inference capability [6] focuses attention towards the active parsing processes and cuts down the fan out of pure forward or backward chaining. The system has many of the attributes and benefits of Kaplan's producer-consumer model [3] which influenced the design of the inference system. The two SNePS subsystems, the MULTI inference system and the MATCH subsystem, provide the user with the pattern matching and parse suspension and continuation capability enjoyed by the Flexible Parser of Hayes & Mouradian [2]. After having entered a lexicon into the system as described above, the user will enter his natural language rules. These rules must be in the IF-THEN conditional form.
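To make the string representation concrete, here is a small Python sketch of the linear, connector-point representation described in relations (a) through (c) above. The node names (M1, M2, ...) imitate the paper's figures, but the data structures and function names are illustrative assumptions rather than the actual SNePS case frames.

```python
# Sketch of the linear surface-string representation: each word is linked
# to the "connector point" that precedes it, and category assertions can
# refer to a (begin-point, end-point) span of the string.
import itertools

_counter = itertools.count(1)

def new_node(prefix="M"):
    return f"{prefix}{next(_counter)}"

def read_string(words):
    """Build connector points and word links for a surface string."""
    points = [new_node()]                 # initial connector point
    links = []
    for w in words:
        prev, nxt = points[-1], new_node()
        links.append({"word": w, "preceded-by": prev, "followed-by": nxt})
        points.append(nxt)
    return points, links

def assert_span_category(points, beg, end, category, assertions):
    """Record that the substring between two connector points is in a
    string category (e.g. GNP), as in relation (c) above."""
    assertions.append({"beg": points[beg], "end": points[end], "cat": category})

if __name__ == "__main__":
    assertions = []
    pts, links = read_string(["A", "BOTTLE", "IS", "A", "CONTAINER"])
    # The span covering "A BOTTLE" might later be recognized as a GNP.
    assert_span_category(pts, 0, 2, "GNP", assertions)
    print(links[0])
    print(assertions)
```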
A sample rule that the user might enter is: The words which are underlined in the above rule are terms selected by the user for certain linguistic entities. The lexical category names such as G-DETERMINER and NOUN must be entered previously as discussed above. The words MOD-NOUN and HEAD-NOUN specify lexical constituents of a string and therefore the system adds them to the L-REL category. The string name NNP is added to the S-CAT category by the system. The user's rule-statement is read by the system and processed by existing rules as described above. When it has been completely analyzed, a translation of the rule-statement is asserted in the form of a network rule structure. This rule is then available to analyze further user inputs.

The form of these user rules is determined by the design of our initial core of rules. We could, of course, have written rules which accept user rules of the form NNP ---> G-DETERMINER NOUN NOUN. Notice, however, that most of the user rules of this section contain more information than such simple phrase-structure rules. Figure 2 contains the list of the user natural language rules which are used as input to the NL-system in the example developed for this paper. These rules illustrate the types of rules which the system can handle. (From Figure 2, the rules used as input to the system; rule 11 reads: ** IF THE CHARACTERISTIC OF E IS TO BE MADE OF THE ITEM X * AND Y IS A MEMBER OF E * THEN THE CHARACTERISTIC OF Y IS TO BE MADE OF THE ITEM X.)

By adding the rules of figure 2 to the system, we have enhanced the ability of the NL-system to "understand" surface strings when "read" into the network. If we examine rules 1 and 2, for example, we find they define a GNP (a generic noun phrase). Rules 4, 8, and 9 stipulate that a relationship exists between a surface string and the concept or proposition which is its intension. This relationship we denote by "expresses". When these rules are triggered, they will not only build syntactic information into the network categorizing the particular string that is being "read" in, but will also build a semantic node representing the relationship "expresses" between the string and the node representing its intension. Thus, both semantic and syntactic concepts are built and linked in the network. In contrast to rules 1-9, rules 10 and 11 are purely semantic, not syntactic. The user's rules may deal with syntax alone, semantics alone, or a combination of both.

All knowledge possessed by the system resides in the same semantic network and, therefore, both the rules of the NL-system core and the user's rules can be triggered if their antecedents are satisfied. Thus the user's rules can be used not only for the input of surface strings concerning the task domain (2) discussed in Section 2, but also for enhancing the NL-system's capability of "understanding" input information relative to the NLU domain.

Assuming that we have entered the lexicon via the statements shown in Section 3 and have entered the rules listed in Section 6, we can input a sentence such as "A bottle is a container". Figure 3 illustrates the network representation of the surface string "A bottle is a container" after having been processed by the user's rules listed in Section 6. Rule 2 would be triggered and would identify "a bottle" and "a container" as GNPs, building nodes M53, M55, M61, and M63 of figure 3.
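As a hedged sketch of what the translation of such a user rule might look like internally, the following Python fragment represents an IF-THEN rule as a sequence of antecedent categories plus consequent assertions, using the NNP ---> G-DETERMINER NOUN NOUN example mentioned above. The rule object, field names, and apply function are illustrative assumptions, not the actual SNePS rule encoding.

```python
# Illustrative encoding of a user rule roughly equivalent to:
#   IF a string consists of a G-DETERMINER followed by a NOUN followed by a
#   NOUN, THEN that string is an NNP whose MOD-NOUN is the first noun and
#   whose HEAD-NOUN is the second noun.
from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    pattern: list             # sequence of lexical categories to match
    string_cat: str           # S-CAT name to assert for the whole span
    l_rels: dict = field(default_factory=dict)  # L-REL name -> pattern index

NNP_RULE = Rule(
    name="rule-3",
    pattern=["G-DETERMINER", "NOUN", "NOUN"],
    string_cat="NNP",
    l_rels={"MOD-NOUN": 1, "HEAD-NOUN": 2},
)

def apply_rule(rule, tagged_words):
    """tagged_words: list of (word, lexical-category) pairs for one span."""
    cats = [cat for _, cat in tagged_words]
    if cats != rule.pattern:
        return None
    assertion = {"cat": rule.string_cat,
                 "string": [w for w, _ in tagged_words]}
    for rel, idx in rule.l_rels.items():
        assertion[rel] = tagged_words[idx][0]
    return assertion

if __name__ == "__main__":
    span = [("A", "G-DETERMINER"), ("MILK", "NOUN"), ("BOTTLE", "NOUN")]
    print(apply_rule(NNP_RULE, span))
    # {'cat': 'NNP', 'string': ['A', 'MILK', 'BOTTLE'],
    #  'MOD-NOUN': 'MILK', 'HEAD-NOUN': 'BOTTLE'}
```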
Then the antecedent of rule 7 would be satisfied by the sentence, since it consists of a GNP, namely "a bottle", followed by the word "is", followed by a GNP, namely "a container". Therefore the node M90 of figure 3 would be built identifying the sentence as a DGNP-SNTC. The addition of this knowledge would trigger rule 8 and node M75 of figure 3 would be built asserting that the class named "bottle" is a subset of the class named "container". Furthermore, node M91 would be built asserting that the sentence EXPRESSES the above stated subset proposition.

Let us now input additional statements to the system. As each sentence is added, node structures are built in the network concerning both the syntactic properties of the sentence and the underlying semantics of the sentence. Each of these structures is built into the system only, however, if it is the consequence of the triggering of one of the expert's rules. We now add three sentences (preceded by the **) and the program response is shown for each. Each of the above input sentences is parsed by the rules of Section 6 identifying the various noun phrases and sentence structures, and a particular semantic subset relationship is built corresponding to each sentence. We can now query the system concerning the information just added and the core rules will process the query. The query is parsed, an answer is deduced from the information now stored in the semantic network, and a reply is generated from the network structure which represents the assertion of the subset relationship built corresponding to each of the above input statements. The next section discusses the question-answering/generation facility in more detail.

Now we input the sentence "A milk bottle is on a table". The rules involved are rules 2, 3, 4, 6, 9, and 10. The phrase "a milk bottle" triggers rule 3 which identifies it as an NNP (noun-noun phrase). Then since the string has been identified as an NNP, rule 4 is triggered and a new class is created and the new class is a subset of the class representing bottles. Rule 6 is also triggered by the addition of the instances of the consequents of rules 3 and 4 and by our previous input sentences asserting that "A bottle is a container" and "Milk is a fluid". As a result, additional knowledge is built into the network concerning the new sub-class of bottles: the function of this new class is to contain milk. Then since "a table" satisfies the conditions for rule 2, it is identified as a GNP, rule 9 is finally triggered, and a structure is built into the network representing the concept that a member of the set of bottles for containing milk is on a member of the set of tables. The antecedents of rule 10 are satisfied by this member of the set of bottles for containing milk, and an assertion is added to the effect that the function of this member is also to contain milk. The computer responds "I UNDERSTAND THAT . . ." only when a structure has been built which the sentence EXPRESSES. In order to further ascertain whether the system has understood the input sentence, we can query the system as follows. The system's core rules again parse the query, deduce the answer, and generate a phrase to express the answer.

We now input the sentence "A glass bottle is on a desk" to be parsed and processed by the rules of Section 6. Processing of this sentence is similar to that of the previous sentence, except that rule 5 will be triggered instead of rule 6 since the system has been informed that glass is a material.
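The chain of rule firings just described for "a milk bottle" can be imitated with a toy forward-chaining step; the sketch below is only meant to show the flow of rules 3/4 and 6 under stated assumptions. The fact formats and the two Python functions are our own illustration, not the SNePS machinery.

```python
# Toy forward chaining over the "Bottle Domain" facts used in the example.
facts = {("subset", "BOTTLE", "CONTAINER"),
         ("subset", "MILK", "FLUID")}

def rule_3_4(nnp):
    """Rules 3/4: a noun-noun phrase names a new subclass of its head noun."""
    mod, head = nnp
    new_class = f"{mod}-{head}"
    return {("subset", new_class, head), ("nnp", mod, head, new_class)}

def rule_6(known):
    """Rule 6: if the modifier of an NNP class is a fluid and the head noun is
    a container, the function of the new class is to contain that fluid."""
    derived = set()
    for f in known:
        if f[0] != "nnp":
            continue
        mod, head, new_class = f[1], f[2], f[3]
        if ("subset", mod, "FLUID") in known and ("subset", head, "CONTAINER") in known:
            derived.add(("function", new_class, "CONTAIN", mod))
    return derived

if __name__ == "__main__":
    facts |= rule_3_4(("MILK", "BOTTLE"))
    facts |= rule_6(facts)
    print(("function", "MILK-BOTTLE", "CONTAIN", "MILK") in facts)  # True
```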
Since the string "a glass bottle" is a noun-noun phrase, glass is a subset of material, and bottle is a subset of container, a new class is created which is a subset of bottles and the characteristic of this class is to be made of glass. The remainder of the sentence is processed in the same way as the previous input sentence, until finally a structure is built to represent the proposition that a member of the set of bottles made of glass is on a member of the set of desks. Again, this proposition is linked to the input sentence by an EXPRESSES relation. When we input the sentence (again preceded by the **) to the system, it responds with its conclusion as shown here. To make sure that the system understands the difference between "glass bottle" and "milk bottle", we query the system relative to the item on the desk:

We now try "A water bottle is on a bar", but the system cannot fully understand this sentence since it has no knowledge about water. We have not told the system whether water is a fluid or a material. Therefore, rules 3 and 4 are triggered and a node is built to represent this new class of bottles, but no assertion is built concerning the properties of these bottles. Since only three of the four antecedents of rule 6 are satisfied, processing of this rule is suspended. Rule 9 is triggered, however, since all of its antecedents are satisfied, and therefore an assertion is built into the network representing the proposition that a member of a subset of bottles is on a member of the class of bars. Thus the system replies that it has understood the input sentence, but really has not fully understood the phrase "a water bottle" as we can see when we query the system. It does not respond that it is "a bottle for containing water". Essentially, the phrase "water bottle" is ambiguous for the system. It might mean "bottle for containing water", "bottle made of water", or something else. The system's "representation" of this ambiguity is the suspended rule processing. Meanwhile the parts of the sentence which are "comprehensible" to the system have been processed and stored. After we tell the system "Water is a fluid", the system resumes its processing of rule 6 and an assertion is established in the network representing the concept that the function of this latest class of bottles is to contain water. The ambiguity is resolved by rule processing being completed in one of the ways which were previously possible. We can then query the system to show its understanding of what type of bottle is on the bar.

This example demonstrates two features of the system: 1) The combined use of syntactic and semantic information in the processing of surface strings. This feature is one of the primary benefits of having not only syntactic and semantic, but also hybrid rules. 2) The use of bi-directional inference to use later information to process or disambiguate earlier strings, even across sentence boundaries.

The question-answering/generation facility of the NL-system, mentioned briefly in Section 2, is completely rule-based. When a query such as "What is a bottle?" is entered into the system, the sentence is parsed by rules of the core in conjunction with user-defined rules. That is, rule 2 of Section 6 would identify "a bottle" as a GNP, but the top level parse of the input string is accomplished by a core rule. The syntax and corresponding semantics designated by rules 7 and 8 of Section 6 form the basis of the core rule.
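The "suspended rule" behaviour described for "a water bottle" can be pictured with a small sketch in which a pending rule instance records which antecedents are still missing and is resumed when a later assertion supplies one. This is an illustrative approximation of SNePS/MULTI process suspension, assuming a simple fact-tuple format, not the actual mechanism.

```python
# Sketch: a pending rule instance resumes when a missing antecedent arrives.
class PendingRule:
    def __init__(self, needed, conclude):
        self.needed = set(needed)   # antecedent facts still missing
        self.conclude = conclude    # callable run once all antecedents hold

facts = set()
pending = []

def assert_fact(fact):
    facts.add(fact)
    for rule in list(pending):
        rule.needed.discard(fact)
        if not rule.needed:
            pending.remove(rule)
            assert_fact(rule.conclude())

if __name__ == "__main__":
    # "A water bottle is on a bar": three of rule 6's antecedents hold, but
    # (subset WATER FLUID) is unknown, so the rule instance is suspended.
    pending.append(PendingRule(
        needed=[("subset", "WATER", "FLUID")],
        conclude=lambda: ("function", "WATER-BOTTLE", "CONTAIN", "WATER")))
    assert_fact(("on", "WATER-BOTTLE-1", "BAR-1"))
    print(("function", "WATER-BOTTLE", "CONTAIN", "WATER") in facts)  # False
    # Later: "Water is a fluid." -- the suspended rule completes.
    assert_fact(("subset", "WATER", "FLUID"))
    print(("function", "WATER-BOTTLE", "CONTAIN", "WATER") in facts)  # True
```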
Our current system does not enable the user to specify the syntax and semantics of questions, so the core rules which define the syntax and consequents of a question were coded specifically for the example of this paper; we intend to pursue this issue in the future. Currently, the two types of questions that our system can process are: WHAT IS <NP> ? and WHAT IS <RELATION> <NP> ? Upon successful parse of the query, the system engages in a deduction process to determine which set is a superset of the set of bottles. This process can either find an assertion in the network answering the query or, if necessary, the process can utilize bi-directional inference, initiated in backward-chaining mode, to deduce an answer. In this instance, the network structure dominated by node M75 of figure 3 is found as the answer to the query. This structure asserts that the set of bottles is a subset of the set of containers.

Another deduction process is now initiated to generate a surface string to express this structure. For the purpose of generation, we have deliberately not used the input strings which caused the semantic network structures to be built. If we had deduced a string which EXPRESSES node M75, the system would simply have found and repeated the sentence represented by node M90 of figure 3. We plan to make use of these surface strings in future work, but for this study, we have employed a second "expresses" relation, which we call EXPRESS-2, and rules of the core to generate surface strings to express semantic structures. (Figure 4. Network representation of a generated surface string.) Figure 4 illustrates the network representation of the surface string generated for node M75. The string "A bottle", dominated by node M221, is generated for node M54 of figure 3, expressing an arbitrary member of the set of bottles. The string "a container", dominated by node M223, is generated to express the set of containers, represented by node M62 of figure 3. Finally, the surface string "A bottle is a container", represented by node M226, is established to express node M75 and the answer to the query.

In general, a surface sentence is generated to EXPRESS-2 a given semantic structure by first generating strings to EXPRESS-2 the substructures of the semantic structure and by assembling these strings into a network version of a list. Thus the semantic structure is processed in a bottom-up fashion. The structure of the generated string is a phrase-structured representation utilizing FIRST and REST pointers to the sub-phrases of a string. This representation reflects the subordinate relation of a phrase to its "parent" phrase. The structures pointed to by the FIRST and REST arcs can be a) another list structure with FIRST and REST pointers; b) a string represented by a node such as M90 of figure 3 with BEG, END, and CAT arcs; or c) a node with WORD arc to a word and an optional PRED arc to another node with PRED and WORD arcs. After the structure representing the surface string has been generated, the resulting list or tree is traversed and the leaf nodes printed as response.

Our goal is to design an NLU system for a linguistic theorist to use for language processing. The system's linguistic knowledge should be available to the theorist as domain knowledge. As a result of our preliminary study of a KE approach to Natural Language Understanding, we have gained valuable experience with the basic tools and concepts of such a system.
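A compact way to picture the bottom-up generation step is the sketch below, which assembles a FIRST/REST list structure for the subset assertion and then prints its leaves. The phrasing templates and helper names are illustrative assumptions rather than the paper's actual EXPRESS-2 generation rules.

```python
# Sketch: generate "A bottle is a container" from a subset assertion by
# building a FIRST/REST tree bottom-up and printing its leaves.
def leaf(word):
    return {"WORD": word}

def cons(first, rest=None):
    return {"FIRST": first, "REST": rest}

def gen_member_phrase(class_name):           # an arbitrary member of a class
    return cons(leaf("A"), cons(leaf(class_name.lower())))

def gen_class_phrase(class_name):            # the class itself
    return cons(leaf("a"), cons(leaf(class_name.lower())))

def gen_subset_sentence(sub, sup):
    # Generate strings for the substructures first, then assemble the sentence.
    return cons(gen_member_phrase(sub),
                cons(leaf("is"), cons(gen_class_phrase(sup))))

def leaves(node):
    if node is None:
        return []
    if "WORD" in node:
        return [node["WORD"]]
    return leaves(node["FIRST"]) + leaves(node["REST"])

if __name__ == "__main__":
    tree = gen_subset_sentence("BOTTLE", "CONTAINER")
    print(" ".join(leaves(tree)))   # A bottle is a container
```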
All aspects of our NL-system have, of course, undergone many revisions and refinements during development and will most likely continue to do so. During the course of our study, we have a) developed two representations of a surface string: 1) a linear representation appropriate for input strings as shown in figure 1; and 2) a phrase-structured representation appropriate for generation, shown in figure 4; b) designed a set of SNePS rules which are capable of analyzing the user's natural language input rules and building the corresponding network rules; c) identified basic concepts essential for linguistic analysis: lexical category, phrase category, relation between a string and lexical constituent, relation between a string and substring, the expresses relations between syntactic structures and semantic structures, and the concept of a variable that the user may wish to use in input rules; d) designed a set of SNePS rules which can analyze some simple queries and generate a response.

As our system has evolved, we have striven to reduce the amount of core knowledge which is essential for the system to function. We want to enable the user to define the language processing capabilities of the system, but a basic core of rules is essential to process the user's initial lexicon entries and rules. One of our high priority items for the immediate future is to pursue this issue. Our objective is to develop the NL-system into a boot-strap system to the greatest degree possible. That is, with a minimal core of pre-programmed knowledge, the user will input rules and assertions to enhance the system's capability to acquire both linguistic and nonlinguistic knowledge. In other words, the user will define his own input language for entering knowledge into the system and conversing with the system. Another topic of future investigation will be the feasibility of extending the user's control over the system's basic tools by enabling the user to define the network Case frames for syntactic and semantic knowledge representation. We also intend to extend the capability of the system so as to enable the user to define the syntax of questions and the nature of response.

This study explores the realm of a Knowledge Engineering approach to Natural Language Understanding. A basic core of NL rules enables the NLU expert to input his natural language rules and his lexicon into the semantic network knowledge base in natural language. In this system, the rules and assertions concerning both semantic and syntactic knowledge are stored in the network and undergo interaction during the deduction processes. An example was presented to illustrate: entry of the user's lexicon into the system; entry of the user's natural language rule statements into the system; the types of rule statements which the user can utilize; how rules build conceptual structures from surface strings; the use of knowledge for disambiguating surface structure; the use of later information for disambiguating an earlier, partially understood sentence; and the question-answering/generation facility of the NL-system.
Appendix:
| null | null | null | null | {
"paperhash": [
"lehnert|the_process_of_question_answering",
"robinson|diagram:_a_grammar_for_dialogues",
"mckay|multi_-_a_lisp_based_multiprocessing_system",
"haas|an_approach_to_acquiring_and_applying_knowledge",
"hayes|flexible_parsing",
"shapiro|generalized_augmented_transition_network_grammars_for_generation_from_semantic_networks",
"kaplan|a_multi-processing_approach_to_natural_language",
"findler|associative_networks-_representation_and_use_of_knowledge_by_computers",
"kay|the_mind_system"
],
"title": [
"The Process of Question Answering",
"DIAGRAM: a grammar for dialogues",
"MULTI - a LISP based multiprocessing system",
"An Approach to Acquiring and Applying Knowledge",
"Flexible Parsing",
"Generalized Augmented Transition Network Grammars for Generation from Semantic Networks",
"A multi-processing approach to natural language",
"Associative Networks- Representation and Use of Knowledge by Computers",
"The MIND System"
],
"abstract": [
"Abstract : Problems in computational question answering assume a new perspective when question answering is viewed as a problem in natural language processing. A theory of question answering has been proposed which relies on ideas in conceptual information processing and theories of human memory organization. This theory of question answering has been implemented in a computer program, QUALM, currently being used by two story understanding systems to complete a natural language processing system which reads stories and answers questions about what was read. The processes in QUALM are divided into 4 phases: (1) Conceptual categorization which guides subsequent processing by dictating which specific inference mechanisms and memory retrieval strategies should be invoked in the course of answering a question; (2) Inferential analysis which is responsible for understanding what the questioner really meant when a question should not be taken literally; (3) Content specification which determines how much of an answer should be returned in terms of detail and elaborations, and (4) Retrieval heuristics which do the actual digging to extract an answer from memory.",
"An explanatory overview is given of DIAGRAM, a large and complex grammar used in an artificial intelligence system for interpreting English dialogue. DIAGRAM is an augmented phrase-structure grammar with rule procedures that allow phrases to inherit attributes from their constituents and to acquire attributes from the larger phrases in which they themselves are constituents. These attributes are used to set context-sensitive constraints on the acceptance of an analysis. Constraints can be imposed by conditions on dominance as well as by conditions on constituency. Rule procedures can also assign scores to an analysis to rate it as probable or unlikely. Less likely analyses can be ignored by the procedures that interpret the utterance. For every expression it analyzes, DIAGRAM provides an annotated description of the structure. The annotations supply important information for other parts of the system that interpret the expression in the context of a dialogue.\nMajor design decisions are explained and illustrated. Some contrasts with transformational grammars are pointed out and problems that motivate a plan to use metarules in the future are discussed. (Metarules derive new rules from a set of base rules to achieve the kind of generality previously captured by transformational grammars but without having to perform transformations on syntactic analyses.)",
"A package of LISP functions, collectively called MULTI, which extends LISP 1.5 to multiprogramming is presented. MULTI defines the notion of a process within a LISP implementation using function invocation as the only control primitive. A process is an executable entity consisting of a process template and a set of register values. The process template defines the operations the process carries out. Process environments are saved in what can be viewed as function call instances, i.e. LISP forms which have the name of a process template in functional position and the register values following it. The flexibility of this simple conceptualization of processes is demonstrated by several examples which use MULTI to implement recursion, backtracking, generators, agendas and AND/OR graph searching. The implementation of MULTI does not assume that the host LISP system provides any data or control environment saving mechanisms such as FUNARG or INTERLISP's spaghetti stack. Thus, MULTI is portable to other LISP systems.",
"The problem addressed in this paper is how to enable a computer system to acquire facts about new domains from tutors who are experts in their respective fields, but who have little or no training in computer science. The information to be acquired is that needed to support question-answering activities. The basic acquisition approach is \"learning by being told.\" We have been especially interested in exploring the notion of simultaneously learning not only new concepts, but also the linguistic constructions used to express those concepts. As a research vehicle we have developed a system that is preprogrammed with deductive algorithms and a fixed set of syntactic/semantic rules covering a small subset of English. It has been endowed with sufficient seed concepts and seed vocabulary to support effective tutorial interaction. Furthermore, the system is capable of learning new concepts and vocabulary, and can apply its acquired knowledge in a prescribed range of problem-solving situations.",
"When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking-off and restarting, speaking in fragments, etc., Their human listeners are usually able to cope with these deviations with little difficulty. If a computer system wishes to accept natural language input from its users on a routine basis, it must display a similar indifference. In this paper, we outline a set of parsing flexibilities that such a system should provide. We go on to describe FlexP. a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system.",
"The augmented transition network (ATN) is a formalism for writing parsing grammars that has been much used in Artificial Intelligence and Computational Linguistics. A few researchers have also used ATNs for writing grammars for generating sentences. Previously, however, either generation ATNs did not have the same semantics as parsing ATNs, or they required an auxiliary mechanism to determine the syntactic structure of the sentence to be generated. This paper reports a generalization of the ATN formalism that allows ATN grammars to be written to parse labelled directed graphs. Specifically, an ATN grammar can be written to parse a semantic network and generate a surface string as its analysis. An example is given of a combined parsing-generating grammar that parses surface sentences, builds and queries a semantic network knowledge representation, and generates surface sentences in response.",
"Natural languages such as English are exceedingly complicated media for the communication of information, attitudes, beliefs, and feelings. Computer systems that attempt to process natural languages in more than the most trivial ways are correspondingly complex. Not only must they be capable of dealing with elaborate descriptions of how the language is put together (in the form of large dictionaries, grammars, sets of inference strategies, etc.), but they must also be able to coordinate the activities and interactions of the many different components that use these descriptions. For example, speech understanding systems of the sort that are currently being developed under ARPA sponsorship must have procedures for the reception of speech input, phonological segmentation and word recognition, dictionary consultation, and morphological, syntactic, semantic, and pragmatic analyses. The problems of coordination and control are reduced only slightly in less ambitious projects such as question answering, automatic programming, content analysis, and information retrieval. Of course, large-scale software systems in other domains might rival natural language programs in terms of the number and complexity of individual components. The central theme of the present paper, however, is that natural language control problems have a fundamentally different character from those of most other systems and require a somewhat unusual solution: the many natural language procedures should be conceptualized and implemented as a collection of asynchronous communicating parallel processes.",
"Upon opening this book and leafing through the pages, one gets the impression of an important compendium. The fourteen articles provide good coverage of semantic networks and related systems for representing knowledge. Their average length of 33 pages is long enough to give each author reasonable scope, yet short enough to permit a variety of viewpoints to be expressed in a single volume. The editor should be commended for his efforts in putting together a wellorganized book instead of just another collection of unrelated papers.",
"The MIND system is a single computer program incorporating an extensible set of fundamental linguistic processors that can be combined on command to carry out a great variety of tasks from grammar testing to question-answering and language translation. The program is controlled from a graphic display console from which the user can specify the sequence of operations, modify rules, edit texts and monitor the details of each operation to any desired extent. Presently available processors include morphological and syntactic analyzers, a semantic file processor, a transformational component, a morphological synthesizer, and an interactive disambiguator."
],
"authors": [
{
"name": [
"W. Lehnert"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jane J. Robinson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. McKay",
"S. Shapiro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Norman Haas",
"G. Hendrix"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Hayes",
"G. Mouradian"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Shapiro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Kaplan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"N. Findler"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Kay",
"G. Martins"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"57370597",
"17788520",
"359906",
"7704586",
"11007680",
"4247142",
"18981720",
"15616277",
"43606749"
],
"intents": [
[
"methodology"
],
[],
[],
[],
[],
[],
[
"background"
],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | Problem: This paper aims to investigate the feasibility and effectiveness of a Knowledge Engineering approach to Natural Language Understanding (NLU) by developing a rule-based computer system that utilizes a semantic network for knowledge storage and representation.
Solution: The hypothesis is that by enabling the input of linguistic knowledge in natural language, facilitating reasoning and inference tracing, and allowing for the interaction between syntactic and semantic knowledge, the developed system will enhance the NLU expert's ability to add, understand, and utilize knowledge for language processing tasks. | 512 | 0.029297 | null | null | null | null | null | null | null | null |
7677c34ce2c4ca02caf93c0dc4909129c4d8abb8 | 6224861 | null | Reflections on 20 Years of the {ACL}: An Introduction | Our society was founded on 13 June 1962 as the Association for Machine Translation and Computational Linguistics. Consequently, this 1982 Annual Meeting represents our 20th anniversary. We did, Of course, change our name to the Association for Computational Linguistics in 1968, but that did not affect the continuity of the organization. The date of this panel, 17 June, misses the real anniversary by four days, but no matter; the occasion still allows us to reflect on where we have been and where we are going. I seem to be sensitive to opportunities for celebrations. In looking through my AMTCL/ACL correspondence over the years, I came across a copy of a memo sent to Bob Simmons and Hood Roberts during our lOth anniversary year, recommending that something in commemoration might be appropriate. I cannot identify anything in the program of that meeting or in my notes about it that suggests they took me seriously then, but that reflects the critical difference between volunteering a recommendation and Just plain volunteerlngl My invitation to participate in this panel was sent out to the presidents of the Association, who were, in order, | {
"name": [
"Walker, Donald E."
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 0 | 0 | null | Our society was founded on 13 June 1962 as the Association for Machine Translation and Computational Linguistics. Consequently, this 1982 Annual Meeting represents our 20th anniversary.We did, Of course, change our name to the Association for Computational Linguistics in 1968, but that did not affect the continuity of the organization.The date of this panel, 17 June, misses the real anniversary by four days, but no matter; the occasion still allows us to reflect on where we have been and where we are going.I seem to be sensitive to opportunities for celebrations.In looking through my AMTCL/ACL correspondence over the years, I came across a copy of a memo sent to Bob Simmons and Hood Roberts during our lOth anniversary year, recommending that something in commemoration might be appropriate.I cannot identify anything in the program of that meeting or in my notes about it that suggests they took me seriously then, but that reflects the critical difference between volunteering a recommendation and Just plain volunteerlngl Bill, Aravind, Stan, Paul, Jon, Ron, Bonnie, and Norm agreed to Join me on the panel. Jane refused on the grounds that she was not yet part of history and that her Presidential Address provided ample platform to convey her reflections. Win, Paul, Susumo, and Bob Barnes were not able to come, and Hood was still waffling when this piece was being written.Vic, Dave, Win, Bob Simmons, Aravind, Paul, Jon, Norm, and I have written down some of our reflections; they appear on the following pages.My charge to the panelists, with respect to both oral and written tradition, was quite broad: "You are asked to reflect on significant experiences during your tenure of that office, in particular as they reflect on the state of computational linguistics then and now, and perhaps with some suggestions for what the future will bring." The written responses are varied, as you can see; I am sure that the oral responses will prove to be equally so.To provide some perspective--and record some history, I am attaching a synopsis of "officers, editors, committees, meetings, and program chairing" (please let me know about errors!).It is interesting to note the names of people--many of whom are still prominent in the field, the practices associated with our annual meetings, and our publication history.I will comment on the latter two.Our first meeting was held in conjunction with the 1963 ACM National Conference, but it is clear that our primary allegiance has been with the Linguistic Society of America, since we met seven times in conjunction with its summer meetings.For a period, we alternated between the LSA and the Spring Joint Computer Conference--and actually included that schedule in our membership flyer.We This proposal was submitted to the National Science Foundation for support. 
A grant was approved, but it stipulated that we publish a microfiche-only Journal, and we did that until 1978, The Finite String being issued as a separate microfiche during this period.It became increasingly clear during the five microfiche years that the micropublishing industry was not going to develop as predicted in the early 1970s.Microfiche readers that were both inexpensive and convenient had not materialized, and our members were reluctant to commit their manuscripts to a medium that restricted readership to a dedicated few.Consequently, George Heldorn set about converting the AJCL to a printed Journal, the first issue of which appeared in 1980. Respectful of its microformal origins, it is distributed with a microfiche that duplicates the printed version but sometimes contains additional material. The Finite Strin~ Newsletter continues to provide general information of interest to the membership as a special section. | null | null | null | null | Main paper:
:
Our society was founded on 13 June 1962 as the Association for Machine Translation and Computational Linguistics. Consequently, this 1982 Annual Meeting represents our 20th anniversary.We did, Of course, change our name to the Association for Computational Linguistics in 1968, but that did not affect the continuity of the organization.The date of this panel, 17 June, misses the real anniversary by four days, but no matter; the occasion still allows us to reflect on where we have been and where we are going.I seem to be sensitive to opportunities for celebrations.In looking through my AMTCL/ACL correspondence over the years, I came across a copy of a memo sent to Bob Simmons and Hood Roberts during our lOth anniversary year, recommending that something in commemoration might be appropriate.I cannot identify anything in the program of that meeting or in my notes about it that suggests they took me seriously then, but that reflects the critical difference between volunteering a recommendation and Just plain volunteerlngl Bill, Aravind, Stan, Paul, Jon, Ron, Bonnie, and Norm agreed to Join me on the panel. Jane refused on the grounds that she was not yet part of history and that her Presidential Address provided ample platform to convey her reflections. Win, Paul, Susumo, and Bob Barnes were not able to come, and Hood was still waffling when this piece was being written.Vic, Dave, Win, Bob Simmons, Aravind, Paul, Jon, Norm, and I have written down some of our reflections; they appear on the following pages.My charge to the panelists, with respect to both oral and written tradition, was quite broad: "You are asked to reflect on significant experiences during your tenure of that office, in particular as they reflect on the state of computational linguistics then and now, and perhaps with some suggestions for what the future will bring." The written responses are varied, as you can see; I am sure that the oral responses will prove to be equally so.To provide some perspective--and record some history, I am attaching a synopsis of "officers, editors, committees, meetings, and program chairing" (please let me know about errors!).It is interesting to note the names of people--many of whom are still prominent in the field, the practices associated with our annual meetings, and our publication history.I will comment on the latter two.Our first meeting was held in conjunction with the 1963 ACM National Conference, but it is clear that our primary allegiance has been with the Linguistic Society of America, since we met seven times in conjunction with its summer meetings.For a period, we alternated between the LSA and the Spring Joint Computer Conference--and actually included that schedule in our membership flyer.We This proposal was submitted to the National Science Foundation for support. A grant was approved, but it stipulated that we publish a microfiche-only Journal, and we did that until 1978, The Finite String being issued as a separate microfiche during this period.It became increasingly clear during the five microfiche years that the micropublishing industry was not going to develop as predicted in the early 1970s.Microfiche readers that were both inexpensive and convenient had not materialized, and our members were reluctant to commit their manuscripts to a medium that restricted readership to a dedicated few.Consequently, George Heldorn set about converting the AJCL to a printed Journal, the first issue of which appeared in 1980. 
Respectful of its microformal origins, it is distributed with a microfiche that duplicates the printed version but sometimes contains additional material. The Finite String Newsletter continues to provide general information of interest to the membership as a special section.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 512 | 0 | null | null | null | null | null | null | null | null |
5dd0ee52ba49cf9bbb63adf51b8a9f2625a14523 | 5189365 | null | Salience: The Key to the Selection Problem in Natural Language Generation | We argue that in domains where a strong notion of salience can be defined, it can be used to provide: (I) an elegant solution to the selection problem, i.e. the problem of how to decide whether a given fact should or should not be mentioned in the text; and (2) a simple and direct control framework for the entire deep generation process, coordinating proposing, planning, and realization. (Deep generation involves reasoning about conceptual and rhetorical facts, as opposed to the narrowly linguistic reasoning that takes place during realization.) We report on an empirical study of salience in pictures of natural scenes, and its use in a computer program that generates descriptive paragraphs comparable to those produced by people. I. The Selection Problem At the heart of research on natural language generation is the question of how to decide what to say and, equally important, what not to say. This is the "selection problem", and it has been approached in various ways in the past: Direct translation generators such as [Swartout 1981, Clancey to appear] avoid the problem by leaving the decision to the original designer of the data structures that serve as the templates to the generator; this places the burden on that designer to correctly anticipate what degree of detail and presupposed knowledge will be appropriate to a specific audience since on-line adjustments are not possible. I. | {
"name": [
"Conklin, E. Jeffrey and",
"McDonald, David D."
],
"affiliation": [
null,
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 12 | 44 | null | null | null | them all through a special filter, leaving out those that are judged to be already known to the audience and letting through those that are new. McKeown [1981] uses a similar technique --her generator, like Mann and Moore's, must examine every potentially mentionable object in the domain data base and make an explicit judgement as to whether to include it. We argue that in a task domain where salience information is available such filters are unnecessary because we can simply define a cut-off salience level below which an object is ignored unless independently required for rhetorical reasons.The most elaborate and heuristic systems to date use meta-knowledge about the facts in the domain and the listener's knowledge of them to plan utterances to achieve some desired effect.Cohen [1978] used speech-act theory to define a space of possible utterances and the goals they could achieve, which he searched by using backwards chaining.Appelt [1982] uses a compiled form of this search procedure which he encodes using Saccerdotti's procedural nets; he is able to plan the achievement of multiple rhetorical goals by looking for opportunities to "piggyback" additional phrases (sub-plans) into pending plans for utterances.We argue that in domains where salience information is already available, such thorough deliberations are often unnecessary, and that a straight-forward enumeration of the domain objects according to their relative salience, augmented with additional rhetorical and stylistic information on a strictly local basis, is sufficient for the demands of the task.In this paper we present an approach to deep generation that uses the relative salience of the objects in the source data base to control the order and detail of their presentation in the text. We follow the usual view that natural language generation is divided into two interleaved phases: one in which selection takes place reflecting the speaker's goals, and the selected material is composed into a (largely conceptual) ,realization specification ,,I (abbreviated "r-spec") according to high-level rhetorical and stylistic conventions, and a second in which the r-spec is realized --the text actually produced --in accordance with the syntactic and morphological rules of the language. We call the first phase "deep generation" --instead of the more specific term "planning" --to reflect our view that its use of actual planning techniques will be limited when compared to their use in the generators developed by Cohen, Appelt, or Mann and Moore.We are developing our theory of deep generation in the context of a computer program that produces simple paragraphs describing photographs of natural scenes similar to those analyzed by the UMass VISIONS System [Hanson and Riseman 1978, Parma 1980] .Our input is a mock-up of their final analysis of the scene, including a mock-up annotation of the salience of all of the objects and their properties as would be identified by VISIONS; this representation is expressed in a locally developed version of KL-ONE.The paragraphs are realized using MUMBLE [McDonald 1981 [McDonald , 1982 , which is responsible for all low-level linguistic decisions and for carrying out the rhetorical directives given in the r-spec.I. 
We are introducing this new term --"realization specification" --in place of the term ,,message 'r which had been used in earlier ~ ublications on McDonald's generation sy §tem.his is a change in name only: these Objects have the same formal properties as before.The shift reflects the kind of communication metaphor on which this work has actually been based: the old term has often connoted a view of communication as a process of translating a data structure in the speaker's head into language and then reconstructing it in the audience's head.(the so-called "conduit" metaphor). Instead, we take it that a speaker has a set of goals whose realization may entail entirely d~¢fe-ent utterances depending upon who the a~dience is and what they already know; that the speaker's knowledge of their language consist 9 in large part of a catalog of wnat might be saia and the effects it is likely to have on the audience; and that, accordingly, language generation entails a plannin~ process, selecting among these effects according to the desired outcome.As of the beginning of February 1982, the initial version of the deep generation phase has been designed and implemented. Figure I shows the kind of scene we are using in our studies and an example of the kind of paragraph description targeted for our system. Efforts to "This is a picture of a large white house with a white fence in front of it.In front of the fence is a cement sculpture.In front of this is a street, Across the street is a grassy patch with a white mailbox.There are trees all around, with one evergreen to the right of the driveway, which runs next to the house. It is fall, the sky is overcast, and the ground is wet." Figure I . One of the pictd~es used in the experimental studies with one of the subjects' descriptions of it.A mocked-up analysis of this picture was used as the input to the deep generation process in the example discussed below. modify MUMBLE to run in NIL on our VAX are underway, and we anticipate having an initial realization dictionary up and the first texts produced before the end of May.During the summer and fall of 1981, Jeff Conklin (Conklin and Ehrlich, in preparation) carried out the series of psychological experiments discussed immediately below.The results have been use~ to determine the salience ratings for the mock-up of the analyzed scenes, and to provide a corpus of the kinds of texts people actually produce as descriptions of scenes of suburban houses.Our theory of visual salience states that a given person looking at a given picture in a given context assigns a salience (an ordering, rather than a numeric value) to each object as a natural and automatic part of the process of perceiving and organizing the scene. In several experiments the subjects were given a second task: writing a description of the same pictures for which they were doing the rating task (one such description appears in Figure I ). 
In these experiments the series of pictures was shown twice; in the first viewing, half of the subjects did the rating task and the other half did the description task, while in the second viewing the tasks were reversed, (It turned out that the description task had no significant effect on the rating scores.)Although we are still analyzing the data from these experiments, _there are several interesting results.The rating technique is a Also, it was found that salience was a strong determinant in the order of mention of objects in the paragraphs.Specifically, the higher the salience rating given an object by a subject, the more likely that object was to appear in the subject's description.Furthermore, there was a good correlation between the ranking of the objects (by decreasing salience) and the order in which the objects were mentioned in the description. Interestingly, the exceptions to a perfect correlation were generally the cases where a low salience item was "pulled up" into an earlier position in the text, seemingly for rhetorical reasons.The explanation that we propose is that salience is the primary force in selection in scene descriptions, but that rhetorical factors can override it (as illustrated below).Here is an short example of the kind of paragraph which our system currently generates:"This is a picture of a white house with a fence in front of it. The house has a red door and the fence has s red gate.Next to the house is a driveway. In the foreground is s mailbox.It is a cloudy winter day."This paragraph was generated from a perceptual representation (in KL-ONE) in which the most salient objects, in order of decreasing salience, were:House, Fence, Door, Driveway, Gate, and Mailbox.The deep generation component (called GENARO) maintains this list as the "Unmentioned SalientObjects List" (USOL), and it is this data structure which mediates between GENARO and the domain data base (see Figure 3 ). It should be stressed that the USOL contains only objects -not properties of objects or relationships between objects --since we specifically claim that such an "object-driven" approach is not only more natural but also is adequate to the task.There are two "registers" which are used for focus: "Current-Item" and "Main-Item". The Current-Item register contains the object currently in focus (and hence the most salient object which has not previously been mentioned), and the Main-Item register points to the data base's most salient object as the topic of the entire paragraph (this register is set once at the beginning of the paragraph generation process).An object moves into focus by being "popped" from the USOL and placed in the Figure ~ . ~ Liock diagram of the GENARO system. The "O"s in the "Data Base" represent objects in the domain representation, whereas the "~"s are the themeatic "shadows" of these objects used by GENARO for its rhetorical processing. Each of the ovals in the "Rhetorical Rules" box are packets containing one or more rhetorical rules. The r-spec can thus be thought of as a "molecule", each of whose "atoms" is the result of a successful rule. The atoms are "specification elements" to be processed by MUMBLE; they are either objects, properties, or relations from the domain, or rhetorical instructions that originate with GENARO. (N.b. 
In the course of producing a paragraph many r-specs will pass from GENARO toThe flow of the paragraph is determined by which rules are turned on --via the Paragraph Driver's control of which packets are on --and each r-spec is produced "locally", without an awareness of previous r-specs or a planning of future ones.)GENARO starts with an empty message buffer and with Current-item (in our example) set to House, the first item in the Unused Salient Object List.The Introduce packet, which is turned on initially, has a rule which proposes to "Introduce(House)"; this rule's conditions are that the value of the Current-Item be value of the Main-Item (i.e. the Main-Item is in focus), and that the salience of the Main-Item be above some specified threshold. In this example both of these conditions are met, and the "atom" Introduce(House) is proposed at a high rhetorical priority, thus guaranteeing not only that it will be included in the first r-spec, but that it will be the dominant atom in that r-spec.Another rule (in the Elaborate packet), proposes including the color of the house (e.g. Color(House,White)), not because the color is itself salient, but to "flesh out" the. introductory sentence. This rule is included because we noticed that salient items were rarely mentioned as "bare" objects --some property was always given.(Note also that there are other rules that propose mentioning properties of objects on other grounds, i.e. because the property itself is salient.)Finally, there is a rule which notices that Fence is both quite salient and directly related to the current topic, and so proposes In-Front-Of(Fence, House).Since the r-spec now contains three atoms and there are no strong grounds based on salience or considerations of style to continue adding to it, the r-spec is sent (via a narrow bandwith system message) to the process MUMBLE, which immediately starts realizing it. MUMBLE's dictionary contains entries for all of the symbols used in the r-spec, e.g. Introduce, In-front'of, House, etc., which are used to construct a linguistic phrase marker which then controls the realization process, outputing "This is a picture of a white house with a fence in front of it.".Back in GENARO, after the r-spec was sent, the Introduce packet was turned off, the message buffer cleared, Door (the next unused object) removed from the USOL and placed in the Current-Item register, and the Iterative Proposing process started over.In building the next r-spec, Part-of(Door, House) and Color(Door, Red) are inserted, by rules similiar to the ones described above. Suppose, however, that there are no other salient relations or properties to mention about the Current-Item Door: nothing of high rhetorical priority is left to be proposed (n.b. once a rule's proposal is accepted that rule turns itself off until that r-spec is complete). There is, however, a rule called "Condense" which looks for rhetorical parallels and proposes them at low priority (i.e. they only win when there are no, more useful, rhetorical effects which apply).Condense notices that both Door (the Current-Item) and Gate (which is somewhere "down" in the USOL) have the property Red, and that the salience of Gate and of the property Color(Gate, Red) are above the appropriate thresholds, and so proposes that Gate be made the local focus. 
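To make the proposing loop concrete, here is a small Python sketch in which rhetorical rules propose prioritized atoms about the Current-Item from a salience-ordered USOL, and the accepted atoms form one r-spec for the realizer, in the spirit of the Introduce/Elaborate steps just described. The numeric priorities, thresholds, and data shapes are illustrative assumptions rather than GENARO's actual values, and the realized sentence shown in the final comment is the one quoted above.

```python
# Sketch of GENARO-style proposing: rhetorical rules propose (priority, atom)
# pairs about the Current-Item; the accepted atoms form one r-spec.
SALIENCE = {"House": 9, "Fence": 7, "Door": 6, "Driveway": 5,
            "Gate": 4, "Mailbox": 3}
PROPS = {"House": [("Color", "White")], "Door": [("Color", "Red")],
         "Gate": [("Color", "Red")]}
RELS = {"Fence": [("In-Front-Of", "House")]}               # Fence is in front of House

usol = sorted(SALIENCE, key=SALIENCE.get, reverse=True)    # Unmentioned Salient Objects
main_item = usol[0]                                         # topic of the paragraph

def propose(current, introduced):
    proposals = []
    if current == main_item and not introduced and SALIENCE[current] > 8:
        proposals.append((10, ("Introduce", current)))      # Introduce packet
    for prop, value in PROPS.get(current, []):               # flesh out a bare object
        proposals.append((5, (prop, current, value)))
    for other, rels in RELS.items():                         # salient related object
        for rel, target in rels:
            if target == current and SALIENCE[other] >= 4:
                proposals.append((4, (rel, other, current)))
    return proposals

if __name__ == "__main__":
    current = usol.pop(0)                                    # House becomes Current-Item
    ranked = sorted(propose(current, False), key=lambda p: p[0], reverse=True)
    r_spec = [atom for _, atom in ranked]
    print(r_spec)
    # [('Introduce', 'House'), ('Color', 'House', 'White'),
    #  ('In-Front-Of', 'Fence', 'House')]
    # which a realizer such as MUMBLE might render as:
    # "This is a picture of a white house with a fence in front of it."
```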
When this action is taken, a conjunction marker is added to the r-spec, and Gate is pulled out of the USOL and made the Current-Item. The r-spec created by these actions is realized as "The house has a red door and the fence has a red gate." When the USOL is empty the Conclude packet is turned on, and a rule in it proposes the r-spec about the lighting in the picture. (The facts about "cloudy" and "winter" are present in the perceptual representation -- no extra generation work was done to make that message.) One of the issues that we are using GENARO to investigate is that in their written descriptions people sometimes "chain" spatially through a picture, linking objects which are spatially close to each other or are in certain other strong relationships to each other. The paragraph in Figure 1 contains a good example of this style -- the rhetorical skeleton is: This is a picture of an A with a B in front of it. In front of the B is a C. In front of the C is a D. Across the D is an E. As can be seen by inspecting the picture in Figure 1, A through E (i.e. house, fence, sculpture, street, and grassy patch) are arrayed from background to foreground in the picture in a way which allows the "in-front-of" relation to be used between them. ("Across" in this case would be a lexical variation on "in-front-of" introduced deliberately by MUMBLE to break up the repetition.) The question is: by what mechanism do we allow the strong spatial links between these items to override the system's basic strategy of mentioning objects in the order of decreasing salience? The first part of the answer is that the machinery for such chaining already exists in the way the Current-Item register is used (and can be reset) by the rhetorical rules. Since one of the actions rules are allowed to take is to reset the Current-Item to some object, a rule can be written which says "If the Current-Item has a salient relationship Relation to object X, then propose Relation(Current-Item, X) and make X the Current-Item". This rule (let's call it Chain) would have the effect of chaining from object to object as long as no other rules had a higher (rhetorical) priority and the various "Relation"s of the respective Current-Items were salient enough to satisfy the rule's condition. But this kind of chaining would only happen as the result of a happy series of the right local decisions -- each successful firing of Chain would be independent of the others. Furthermore, there would be no guarantee that the successive "Relation"s would be the same, as is the case in the above example. What is needed, perhaps, is to give Chain the ability to look at the structure of the evolving r-spec and to notice when there is an opportunity to build upon a structural parallel (e.g. X in front of Y, Y in front of Z). We are currently investigating ways to make this kind of structural parallel visible within r-specs and still maintain them as a concise and narrow-bandwidth channel between GENARO and MUMBLE.
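The register-and-rule machinery described above can be summarized procedurally. The following Python sketch is an illustrative reconstruction, not the actual GENARO implementation (which was not written in Python): the names State, Rule, build_rspec, the priority values, and the three-atom limit per r-spec are all assumptions introduced only for this example.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class State:
    usol: List[str]          # Unmentioned Salient Objects List, most salient first
    salience: dict           # object or (property, object) -> salience information
    current_item: str = None # object currently in focus
    main_item: str = None    # most salient object, topic of the whole paragraph

@dataclass
class Rule:
    name: str
    condition: Callable      # (state, rspec_so_far) -> bool
    propose: Callable        # (state) -> (atom, rhetorical_priority)
    used: bool = False       # a successful rule turns itself off until the r-spec is done

def build_rspec(state, rules, max_atoms=3):
    # Iterative proposing: repeatedly accept the highest-priority proposal
    # until nothing useful remains or the r-spec "molecule" is full.
    rspec = []
    for rule in rules:
        rule.used = False
    while len(rspec) < max_atoms:
        candidates = [(r,) + r.propose(state) for r in rules
                      if not r.used and r.condition(state, rspec)]
        if not candidates:
            break
        rule, atom, _priority = max(candidates, key=lambda c: c[2])
        rspec.append(atom)
        rule.used = True
    return rspec

# Two toy rules in the spirit of Introduce and Elaborate.
introduce = Rule(
    name="Introduce",
    condition=lambda s, rspec: s.current_item == s.main_item and s.salience[s.main_item] > 0.8,
    propose=lambda s: (("Introduce", s.current_item), 10),
)
elaborate_color = Rule(
    name="ElaborateColor",
    condition=lambda s, rspec: ("color", s.current_item) in s.salience,
    propose=lambda s: (("Color", s.current_item, s.salience[("color", s.current_item)][0]), 5),
)

state = State(
    usol=["house", "fence", "door"],
    salience={"house": 0.9, "fence": 0.7, "door": 0.6, ("color", "house"): ("white", 0.4)},
    current_item="house",
    main_item="house",
)
print(build_rspec(state, [introduce, elaborate_color]))
# -> [('Introduce', 'house'), ('Color', 'house', 'white')]

The resulting list of atoms is what would be handed, as one r-spec, to a surface realizer playing MUMBLE's role.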
:
them all through a special filter, leaving out those that are judged to be already known to the audience and letting through those that are new. McKeown [1981] uses a similar technique --her generator, like Mann and Moore's, must examine every potentially mentionable object in the domain data base and make an explicit judgement as to whether to include it. We argue that in a task domain where salience information is available such filters are unnecessary because we can simply define a cut-off salience level below which an object is ignored unless independently required for rhetorical reasons.The most elaborate and heuristic systems to date use meta-knowledge about the facts in the domain and the listener's knowledge of them to plan utterances to achieve some desired effect.Cohen [1978] used speech-act theory to define a space of possible utterances and the goals they could achieve, which he searched by using backwards chaining.Appelt [1982] uses a compiled form of this search procedure which he encodes using Saccerdotti's procedural nets; he is able to plan the achievement of multiple rhetorical goals by looking for opportunities to "piggyback" additional phrases (sub-plans) into pending plans for utterances.We argue that in domains where salience information is already available, such thorough deliberations are often unnecessary, and that a straight-forward enumeration of the domain objects according to their relative salience, augmented with additional rhetorical and stylistic information on a strictly local basis, is sufficient for the demands of the task.In this paper we present an approach to deep generation that uses the relative salience of the objects in the source data base to control the order and detail of their presentation in the text. We follow the usual view that natural language generation is divided into two interleaved phases: one in which selection takes place reflecting the speaker's goals, and the selected material is composed into a (largely conceptual) ,realization specification ,,I (abbreviated "r-spec") according to high-level rhetorical and stylistic conventions, and a second in which the r-spec is realized --the text actually produced --in accordance with the syntactic and morphological rules of the language. We call the first phase "deep generation" --instead of the more specific term "planning" --to reflect our view that its use of actual planning techniques will be limited when compared to their use in the generators developed by Cohen, Appelt, or Mann and Moore.We are developing our theory of deep generation in the context of a computer program that produces simple paragraphs describing photographs of natural scenes similar to those analyzed by the UMass VISIONS System [Hanson and Riseman 1978, Parma 1980] .Our input is a mock-up of their final analysis of the scene, including a mock-up annotation of the salience of all of the objects and their properties as would be identified by VISIONS; this representation is expressed in a locally developed version of KL-ONE.The paragraphs are realized using MUMBLE [McDonald 1981 [McDonald , 1982 , which is responsible for all low-level linguistic decisions and for carrying out the rhetorical directives given in the r-spec.I. 
We are introducing this new term --"realization specification" --in place of the term ,,message 'r which had been used in earlier ~ ublications on McDonald's generation sy §tem.his is a change in name only: these Objects have the same formal properties as before.The shift reflects the kind of communication metaphor on which this work has actually been based: the old term has often connoted a view of communication as a process of translating a data structure in the speaker's head into language and then reconstructing it in the audience's head.(the so-called "conduit" metaphor). Instead, we take it that a speaker has a set of goals whose realization may entail entirely d~¢fe-ent utterances depending upon who the a~dience is and what they already know; that the speaker's knowledge of their language consist 9 in large part of a catalog of wnat might be saia and the effects it is likely to have on the audience; and that, accordingly, language generation entails a plannin~ process, selecting among these effects according to the desired outcome.As of the beginning of February 1982, the initial version of the deep generation phase has been designed and implemented. Figure I shows the kind of scene we are using in our studies and an example of the kind of paragraph description targeted for our system. Efforts to "This is a picture of a large white house with a white fence in front of it.In front of the fence is a cement sculpture.In front of this is a street, Across the street is a grassy patch with a white mailbox.There are trees all around, with one evergreen to the right of the driveway, which runs next to the house. It is fall, the sky is overcast, and the ground is wet." Figure I . One of the pictd~es used in the experimental studies with one of the subjects' descriptions of it.A mocked-up analysis of this picture was used as the input to the deep generation process in the example discussed below. modify MUMBLE to run in NIL on our VAX are underway, and we anticipate having an initial realization dictionary up and the first texts produced before the end of May.During the summer and fall of 1981, Jeff Conklin (Conklin and Ehrlich, in preparation) carried out the series of psychological experiments discussed immediately below.The results have been use~ to determine the salience ratings for the mock-up of the analyzed scenes, and to provide a corpus of the kinds of texts people actually produce as descriptions of scenes of suburban houses.Our theory of visual salience states that a given person looking at a given picture in a given context assigns a salience (an ordering, rather than a numeric value) to each object as a natural and automatic part of the process of perceiving and organizing the scene. In several experiments the subjects were given a second task: writing a description of the same pictures for which they were doing the rating task (one such description appears in Figure I ). 
In these experiments the series of pictures was shown twice; in the first viewing, half of the subjects did the rating task and the other half did the description task, while in the second viewing the tasks were reversed, (It turned out that the description task had no significant effect on the rating scores.)Although we are still analyzing the data from these experiments, _there are several interesting results.The rating technique is a Also, it was found that salience was a strong determinant in the order of mention of objects in the paragraphs.Specifically, the higher the salience rating given an object by a subject, the more likely that object was to appear in the subject's description.Furthermore, there was a good correlation between the ranking of the objects (by decreasing salience) and the order in which the objects were mentioned in the description. Interestingly, the exceptions to a perfect correlation were generally the cases where a low salience item was "pulled up" into an earlier position in the text, seemingly for rhetorical reasons.The explanation that we propose is that salience is the primary force in selection in scene descriptions, but that rhetorical factors can override it (as illustrated below).Here is an short example of the kind of paragraph which our system currently generates:"This is a picture of a white house with a fence in front of it. The house has a red door and the fence has s red gate.Next to the house is a driveway. In the foreground is s mailbox.It is a cloudy winter day."This paragraph was generated from a perceptual representation (in KL-ONE) in which the most salient objects, in order of decreasing salience, were:House, Fence, Door, Driveway, Gate, and Mailbox.The deep generation component (called GENARO) maintains this list as the "Unmentioned SalientObjects List" (USOL), and it is this data structure which mediates between GENARO and the domain data base (see Figure 3 ). It should be stressed that the USOL contains only objects -not properties of objects or relationships between objects --since we specifically claim that such an "object-driven" approach is not only more natural but also is adequate to the task.There are two "registers" which are used for focus: "Current-Item" and "Main-Item". The Current-Item register contains the object currently in focus (and hence the most salient object which has not previously been mentioned), and the Main-Item register points to the data base's most salient object as the topic of the entire paragraph (this register is set once at the beginning of the paragraph generation process).An object moves into focus by being "popped" from the USOL and placed in the Figure ~ . ~ Liock diagram of the GENARO system. The "O"s in the "Data Base" represent objects in the domain representation, whereas the "~"s are the themeatic "shadows" of these objects used by GENARO for its rhetorical processing. Each of the ovals in the "Rhetorical Rules" box are packets containing one or more rhetorical rules. The r-spec can thus be thought of as a "molecule", each of whose "atoms" is the result of a successful rule. The atoms are "specification elements" to be processed by MUMBLE; they are either objects, properties, or relations from the domain, or rhetorical instructions that originate with GENARO. (N.b. 
In the course of producing a paragraph many r-specs will pass from GENARO toThe flow of the paragraph is determined by which rules are turned on --via the Paragraph Driver's control of which packets are on --and each r-spec is produced "locally", without an awareness of previous r-specs or a planning of future ones.)GENARO starts with an empty message buffer and with Current-item (in our example) set to House, the first item in the Unused Salient Object List.The Introduce packet, which is turned on initially, has a rule which proposes to "Introduce(House)"; this rule's conditions are that the value of the Current-Item be value of the Main-Item (i.e. the Main-Item is in focus), and that the salience of the Main-Item be above some specified threshold. In this example both of these conditions are met, and the "atom" Introduce(House) is proposed at a high rhetorical priority, thus guaranteeing not only that it will be included in the first r-spec, but that it will be the dominant atom in that r-spec.Another rule (in the Elaborate packet), proposes including the color of the house (e.g. Color(House,White)), not because the color is itself salient, but to "flesh out" the. introductory sentence. This rule is included because we noticed that salient items were rarely mentioned as "bare" objects --some property was always given.(Note also that there are other rules that propose mentioning properties of objects on other grounds, i.e. because the property itself is salient.)Finally, there is a rule which notices that Fence is both quite salient and directly related to the current topic, and so proposes In-Front-Of(Fence, House).Since the r-spec now contains three atoms and there are no strong grounds based on salience or considerations of style to continue adding to it, the r-spec is sent (via a narrow bandwith system message) to the process MUMBLE, which immediately starts realizing it. MUMBLE's dictionary contains entries for all of the symbols used in the r-spec, e.g. Introduce, In-front'of, House, etc., which are used to construct a linguistic phrase marker which then controls the realization process, outputing "This is a picture of a white house with a fence in front of it.".Back in GENARO, after the r-spec was sent, the Introduce packet was turned off, the message buffer cleared, Door (the next unused object) removed from the USOL and placed in the Current-Item register, and the Iterative Proposing process started over.In building the next r-spec, Part-of(Door, House) and Color(Door, Red) are inserted, by rules similiar to the ones described above. Suppose, however, that there are no other salient relations or properties to mention about the Current-Item Door: nothing of high rhetorical priority is left to be proposed (n.b. once a rule's proposal is accepted that rule turns itself off until that r-spec is complete). There is, however, a rule called "Condense" which looks for rhetorical parallels and proposes them at low priority (i.e. they only win when there are no, more useful, rhetorical effects which apply).Condense notices that both Door (the Current-Item) and Gate (which is somewhere "down" in the USOL) have the property Red, and that the salience of Gate and of the property Color(Gate, Red) are above the appropriate thresholds, and so proposes that Gate be made the local focus. 
When this action is taken, a conjunction marker is added to the r-spec, and Gate is pulled out of the USOL and made the Current-item.The r-spec created by these actions is realized as "The house has a red door and the fence has a red gate.".When the USOL is empty the Conclude packet is turned on, and a rule in it proposes the r-spec about the lighting in the picture.(The facts about "cloudy" and "winter" are present in the perceptual representation --no extra generation work was done to make that message.)'One of the issues that we are using GENARO to investigate is that in their written descriptions people sometimes "chain" spatially through a picture, linking objects which are spatially close to each other or are in certain other strong relationships to each other. The paragraph in Figure I contains a good example of this style --the rhetorical skeleton is: This is a picture of an A with a B in front of it.In front of the B is a C. In front of the C is a D. Across the D is an E.As can be seen by inspecting the picture in Figure I, A thru E (i.e. house, fence, sculpture, street, and grassy patch) are arrayed from background to foreground in the picture in a way which allows the "in-front-of" relation to be used between them. I The question is: By what mechanism do we allow the strong spatial links between these items to override the system's basic strategy of mentioning objects in the order of decreasing salience?The first part of the answer is that the machinery for such chaining already exists in the way the Current-Item register is used (and can be reset) by the rhetorical rules.Since one of the actions rules are allowed is to reset the Current-Item to some object, a rule can be written which says "If the Current-Item has a salient relationship Relation to object X, then propose Relatlon(Current-Item,X) and make X the Current-Item".This rule (let's call it Chain) would have the effect of chaining from object to object as long as no other rules had a higher I. "Across" in this case would be a lexical variation on "in-front-of" introduced deliberately by MUMBLE to break up the repetition.(rhetorical) priority and the various "Relation"'s of the respective Current-Items were salient enough to satisfy the rule's condition.But this kind of chaining would only happen as the result of a happy series of the right local decisions --each successful firing of Chain would be independent of the others. Furthermore, there would be no guarantee that the successive "Relation"'s would be the same, as is the case in the above example.What is needed, perhaps, is to give Chain the ability to look at the structure of the evolving r-spec and to notice when there is an opportunity to build upon a structural parallel (e.g. X in front of Y, Y in front of Z). We are currently investigating ways to make this kind of structural parallel visible within r-specs and still maintain them as a concise and narrow-bandwidth channel between GENARO and MUMBLE.
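To make the proposed Chain rule concrete, here is a small illustrative sketch of what chaining through a salient relation might look like. It is an assumption-laden toy, not the published system: the triple format (REL, X, Y), the salience table, and the threshold are invented for the example, and each triple is read "Y stands in REL to X" (the fence is in front of the house), so the chain runs from background to foreground.

def chain(current_item, relations, salience, threshold=0.5):
    # Follow a sufficiently salient link from the Current-Item to a new object,
    # propose the corresponding atom, and reset the Current-Item to that object.
    atoms = []
    seen = {current_item}
    while True:
        links = [(rel, other) for (rel, subj, other) in relations
                 if subj == current_item and other not in seen
                 and salience.get((rel, subj, other), 0) >= threshold]
        if not links:
            break
        rel, nxt = max(links, key=lambda l: salience[(l[0], current_item, l[1])])
        atoms.append((rel, current_item, nxt))
        seen.add(nxt)
        current_item = nxt           # the register reset that drives the chaining
    return atoms

relations = [("in-front-of", "house", "fence"),
             ("in-front-of", "fence", "sculpture"),
             ("in-front-of", "sculpture", "street"),
             ("in-front-of", "street", "grassy patch")]
salience = {r: 0.8 for r in relations}
print(chain("house", relations, salience))
# -> four "in-front-of" atoms, i.e. the rhetorical skeleton of the Figure 1 description

As the text notes, a global loop like this goes beyond what independent local firings of Chain would guarantee; it is offered only to show the kind of structural parallel the rule would need to notice.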
Appendix:
| null | null | null | null | {
"paperhash": [
"mcdonald|language_production:_the_source_of_the_dictionary",
"marcus|a_theory_of_syntactic_recognition_for_natural_language",
"brady|natural_language_generation_as_a_computational_problem:_an_introduction",
"appelt|planning_natural_language_utterances_to_satisfy_multiple_goals",
"mann|computer_generation_of_multiparagraph_english_text"
],
"title": [
"Language Production: the Source of the Dictionary",
"A theory of syntactic recognition for natural language",
"Natural Language Generation as a Computational Problem: an Introduction",
"Planning natural language utterances to satisfy multiple goals",
"Computer Generation of Multiparagraph English Text"
],
"abstract": [
"Ultimately in any natural language production system the largest amount of human effort will go into the construction of the dictionary: the data base that associates objects and relations in the program's domain with the words and phrases that could be used to describe them. This paper describes a technique for basing the dictionary directly on the semantic abstraction network used for the domain knowledge itself, taking advantage of the inheritance and specialization machanisms of a network formalism such as KL-ONE. The technique creates considerable economics of scale, and makes possible the automatic description of individual objects according to their position in the semantic net. Furthermore, because the process of deciding what properties to use in an object's description is now given over to a common procedure, we can write general-purpose rules to, for example, avoid redundancy or grammatically awkward constructions.",
"Abstract : Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this 'determinism' hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter. (Author)",
"This chapter contains sections titled: Introduction, Results for Test Speakers, A Computational Model, The Relationship Between the Speaker and the Linguistics Component, The Internal Structure of the Linguistic Component, An Example, Contributions and Limitations",
"This dissertation presents the results of research on a planning formalism for a theory of natural language generation that incorporates generation of utterances that satisfy multiple goals. Previous research in the area of computer generation of natural language utterances has concentrated on one of two aspects of language production: (1) the process of producing surface syntactic forms from an underlying representation, and (2) the planning of illocutionary acts to satisfy the speaker's goals. This work concentrates on the interaction between these two aspects of language generation and considers the overall problem to be one of refining the specification of an illocutionary act into a surface syntactic form, emphasizing the problems of achieving multiple goals in a single utterance. \nPlanning utterances requires an ability to do detailed reasoning about what the hearer knows and wants. A formalism, based on a possible worlds semantics of an intensional logic of knowledge and action, was developed for representing the effects of illocutionary acts and the speaker's beliefs about the hearer's knowledge of the world. Techniques are described that enable a planning system to use the representation effectively. \nThe language planning theory and knowledge representation are embodied in a computer system called KAMP (Knowledge And Modalities Planner) which plans both physical and linguistic actions, given a high level description of the speaker's goal. \nThe research has application to the design of gracefully interacting computer systems, multiple-agent planning systems, and planning to acquire knowledge.",
"This paper reports recent research into methods for creating natural language text. A new processing paradigm called Fragment-and-Compose has been created and an experimental system implemented in it. The knowledge to be expressed in text is first divided into small propositional units, which are then composed into appropriate combinations and converted into text.KDS (Knowledge Delivery System), which embodies this paradigm, has distinct parts devoted to creation of the propositional units, to organization of the text, to prevention of excess redundancy, to creation of combinations of units, to evaluation of these combinations as potential sentences, to selection of the best among competing combinations, and to creation of the final text. The Fragment-and-Compose paradigm and the computational methods of KDS are described."
],
"authors": [
{
"name": [
"David D. McDonald"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Mitchell P. Marcus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Brady",
"R. Berwick"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Appelt"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Mann",
"James A. Moore"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"18645311",
"6616065",
"63032395",
"60491098",
"112842"
],
"intents": [
[],
[
"background"
],
[],
[],
[
"background"
]
],
"isInfluential": [
true,
false,
false,
false,
false
]
} | Problem: The paper addresses the issue of the selection problem in natural language generation, specifically focusing on how to decide what information to include or exclude in a text.
Solution: The paper proposes that in domains where salience information is available, it can be used as a criterion for determining the relevance of objects or facts to be mentioned in the text, thereby simplifying the control framework for the deep generation process. | 512 | 0.085938 | null | null | null | null | null | null | null | null |
60d2a16423369bc09701a9dfaa03034d4a68fe77 | 30303365 | null | On the Linguistic Character of Non-Standard Input | If natural language understanding systems are ever to cope with the full range of English language forms, their designers will have to incorporate a number of features of the spoken vernacular language. This communication discusses such features as non-standard grammatical rules, hesitations and false starts due to self-correction, systematic errors due to mismatches between the grammar and sentence generator, and uncorrected true errors. | {
"name": [
"Kroch, Anthony S. and",
"Hindle, Donald"
],
"affiliation": [
null,
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 4 | 8 | null | There are many ways in which the input to a natural language system can be non-standard without being uninterpretable ~ Most obviously, such input can be the well-formed output of a grammar other than the standard language grammar with which the interpreter is likely to be equipped. This difference of grammar is presumably what we notice in language that we call "non-standard" in everyday life.Obviously, at least from the perspective of a linguist, it is wrong to think of this difference as being due to errors made by the non-standard language user; it is simply a dialect difference. Secondly, the non-standard input can contain hesitations and self-correctlons which make the string uninterpretable unless some parts of it are edited out. This is the normal state of affairs in spoken language so that any system designed to understand spoken communication, even at a rudimentary level must be able to edit its input as well as interpret it. Thirdly, the input may be ungrammatical even by the rules of the grammar of the speaker but be the expected output of the speaker's sentence generating device.This case has not been much discussed, but it is important because in certain environments speakers (and to some extent unskilled writers) regularly produce ungrammmatical output in preference to grammatically unimpeachable alternatives.Finally, the input t~at the system receives may simply contain uncorrected errors.How important this last source of non-standard input would be in a functioning system is hard to judge and wouldThe discussion in this paper is based an on-going study of the syntactic differences between written and of spoken language funded by the National Institute of Education under grants G78-0169 and G80-0163.depend on the environment of use. Uncorrected errors are, in our experience, reasonably rare in fluent speech but they are more common in unskilled writing.These errors may be typographical, a case we shall ignore in this discussion, or they may be grammatical.Of most interest to us are the cases where the error is due to a language user attempting to use a standard language construction that he/she does not natively command.In the course of this brief communication we shall discuss each of the above cases with examples, drawing on work we have done describing the differences between the syntax of vernacular speech and of standard writing (Kroch and Nindle, 1981) .Our work indicates that these differences are sizable enough to cause problems for the acquisition of writing as a skill, and they may arise'as well when natural language understanding systems come to be used by a wider public. Whether problems will indeed arise is, of course, hard to say as it depends on so many factors.The most important of these is whether natural language systems are ever used with oral, as well as typed-in, language. We do not know whether the features of speech that we will be outlining will also show up in "keyboard" language; for its special characteristics have been little studied from a linguistic point of view (for a recent attempt see Thompson 1980) . 
They will certainly occur more sporadically and at a lower incidence than they do in speech; and there may be new features of "keyboard" language that are not predictable from other language modes. We shall have little to say about how the problem of non-standard input can best be handled in a working system; for solving that problem will require more research. If we can give researchers working on natural language systems a clearer idea of what their devices are likely to have to cope with in an environment of widespread public use, our remarks will have achieved their purpose. Informal, generally spoken, English exists in a number of regional, class and ethnic varieties, each with its own grammatical peculiarities. Fortunately, the syntax of these dialects is somewhat less varied than the phonology, so that we may reasonably approximate the situation by speaking of a general "non-standard vernacular" (NV), which contrasts in numerous ways with standard written English (SWE). Some of the differences between the two dialects can lead to problems for parsing and interpretation. Thus, subject-verb agreement, which is categorical in SWE, is variable in NV. In fact, in some environments subject-verb agreement is rarely indicated in NV, the most notable being sentences with dummy there subjects. Thus, the first of the sentences in (1) is the more likely in NV while, of course, only the second can occur in SWE: (1) a. There was two girls on the sofa. b. There were two girls on the sofa. Since singular number is the unmarked alternative, it occurs with both singular and plural subjects; hence only plural marking on a verb can be treated as a clear signal of number in NV. This could easily prove a problem for parsers that use number marking to help find subject-verb pairs. A further, perhaps more difficult, problem would be posed by another feature of NV, the deletion of relative clause complementizers on subject relatives. SWE does not allow sentences like those in (2); but they are the most likely form in many varieties of NV and occur quite freely in the speech of people whose speech is otherwise standard: (2) a. Anybody says it is a liar. b. There was a car used to drive by here. Here a parser that assumes that the first tensed verb following an NP that agrees with it is the main verb will be misled. There are severe constraints on the environments in which subject relatives can appear without a complementizer, apparently to prevent hearers from "garden-pathing" on this construction, but these restrictions are not statable in a purely structural way. A final example of an NV construction which differs from what SWE allows is the use of it for expletive there, as in (3): (3) It was somebody standing on the corner. This construction is categorical in black English, but it occurs with considerable frequency in the speech of whites as well, at least in Philadelphia, the only location on which we have data. This last example poses no problems in principle for a natural language system; it is simply a grammatical fact of NV that has to be incorporated into the grammar implemented by the natural language understanding system. There are many features like this, each trivial in itself but nonetheless a productive feature of the language. Hesitations and false starts are a consistent feature of spoken language and any interpreter that cannot handle them will fail instantly. In one count we found that 52% of the sentences in a 90-minute conversational interview contained at least one instance (Hindle, 1981b).
Fortunately, the deformation of grammaticality caused by self-correction-induced disfluency is quite limited and predictable (Labov, 1966). With a small set of editing rules, therefore, we have been able to normalize more than 95% of such disfluencies in preprocessing texts for input to a parser for spoken language that we have been constructing (Hindle, 1981b). These rules are based on the fact that false starts in speech are phonetically signaled, often by truncation of the final syllable. Marking the truncation and other phonetic editing signals in our transcripts, we find that a simple procedure which removes the minimum number of words necessary to create a parsable sequence eliminates most ill-formedness. The spoken language contains as a normal part of its syntactic repertoire constructions like those illustrated below: (4) The problem is is that nobody understands me. (5) That's the only thing he does is fight. (6) John was the only guest who we weren't sure whether he would come. (7) Didn't have to worry about us. These are constructions that it is difficult to accommodate in a linguistically motivated syntax for obvious reasons. Sentence (4) has two tensed verbs; (5), which has been called a "portmanteau construction", has a constituent belonging simultaneously to two different sentences; (6) has a wh-movement construction with no trace (see the discussion in Kroch, 1981); and (7) violates the absolute grammatical requirement that English sentences have surface subjects. We do not know why these forms occur so regularly in speech, but we do know that they are extremely common. The reasons undoubtedly vary from construction to construction. Thus, (5) has the effect of removing a heavy NP from surface subject position while preserving its semantic role as subject. Since we know that heavy NPs in subject position are greatly disfavored in speech (Kroch and Hindle, 1981), the portmanteau construction is almost certainly performing a useful function in simplifying syntactic processing or the presentation of information. Similarly, relative clauses with resumptive pronouns, like the one in (6), seem to reflect limitations on the sentence planning mechanism used in speech. If a relative clause is begun without computing its complete syntactic analysis, as a procedure like the one in McDonald (1980) suggests, then a resumptive pronoun might be used to fill a gap that turned out to occur in a non-deletable position. This account explains why resumptive pronouns do not occur in writing. They are ungrammatical, and the real-time constraints on sentence planning that cause speech to be produced on the basis of limited look-ahead are absent. Subject deletion, illustrated in (7), is clearly a case of ellipsis induced in speech for reasons of economy, like contraction and cliticization.
However, English grammar does not allow subjectless tensed clauses. In fact, it is this prohibition that explains the existence of expletive it in English, a feature completely absent from languages with subjectless sentences. Of course, subject deletion in speech is highly constrained and its occurrence can be accommodated in a parser without completely rewriting the grammar of English, and we have done so. The point here, as with all these examples, is that close study of the syntax of speech repays the effort with improvements in coverage. The final sort of non-standard input that we will mention is the uncorrected true error. In our analysis of 40 or more hours of spoken interview material we have found true errors to be rare. They generally occur when people express complex ideas that they have not talked about before, and they involve changing direction in the middle of a sentence. An example of this sort of mistake is given in (8), where the object of a prepositional phrase turns into the subject of a following clause: (8) When I was able to understand the explanation of the moves of the chessmen started to make sense to me, he became interested. Large parts of sentences with errors like this are parsable, but the whole may not make sense. Clearly, a natural language system should be able to make whatever sense can be made out of such strings even if it cannot construct an overall structure for them. Having done as well as it can, the system must then rely on context, just as a human interlocutor would. Unlike vernacular speech, the writing of unskilled writers quite commonly displays errors. One case, which we have studied in detail, is that of errors in relative clauses with "pied-piped" prepositional phrases. We often find clauses like the ones in (9), where the wrong preposition (usually in) appears at the beginning of the clause: (9) a. methods in which to communicate with other people b. rules in which people can direct their efforts. Since pied-piped relatives are non-existent in NV, the simplest explanation for such examples is that they are errors due to imperfect learning of the standard language rule. More precisely, instead of moving a wh-prepositional phrase to the complementizer position in the relative clause, unskilled writers may analyze the phrase in which as a general oblique relativizer equivalent to where, the form most commonly used in this function in informal speech. In summary, ordinary linguistic usage exhibits numerous deviations from the standard written language. The sources of these deviations are diverse and they are of varying significance for natural language processing. It is safe to say, however, that an accurate assessment of their nature, frequency and effect on interpretability is a necessary prerequisite to the development of truly robust systems.
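The editing idea sketched above (delete the smallest span of words ending at a phonetically marked false start such that what remains is parsable) can be illustrated with a toy procedure. This is not Hindle's actual rule set: the "-" signal token, the acceptable() stand-in for a real parser, and the example sentence are all assumptions made for the sake of the sketch.

def acceptable(toks):
    # Toy stand-in for a parser: reject immediate word echoes and doubled bigrams,
    # which is enough to catch simple retraced false starts like the example below.
    no_word_echo = all(a != b for a, b in zip(toks, toks[1:]))
    no_bigram_echo = all(toks[j:j + 2] != toks[j + 2:j + 4] for j in range(len(toks) - 3))
    return no_word_echo and no_bigram_echo

def edit_false_start(tokens, is_ok, signal="-"):
    # Remove the editing signal plus the smallest number of preceding words
    # that leaves a sequence the (stand-in) grammar will accept.
    out = list(tokens)
    while signal in out:
        i = out.index(signal)
        for span in range(0, i + 1):
            candidate = out[:i - span] + out[i + 1:]
            if is_ok(candidate):
                out = candidate
                break
        else:
            out = out[:i] + out[i + 1:]   # give up: just drop the signal
    return out

print(edit_false_start("I went to the - to the store".split(), acceptable))
# -> ['I', 'went', 'to', 'the', 'store']

In a real system the is_ok test would be the spoken-language parser itself rather than this two-line heuristic.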
:
There are many ways in which the input to a natural language system can be non-standard without being uninterpretable ~ Most obviously, such input can be the well-formed output of a grammar other than the standard language grammar with which the interpreter is likely to be equipped. This difference of grammar is presumably what we notice in language that we call "non-standard" in everyday life.Obviously, at least from the perspective of a linguist, it is wrong to think of this difference as being due to errors made by the non-standard language user; it is simply a dialect difference. Secondly, the non-standard input can contain hesitations and self-correctlons which make the string uninterpretable unless some parts of it are edited out. This is the normal state of affairs in spoken language so that any system designed to understand spoken communication, even at a rudimentary level must be able to edit its input as well as interpret it. Thirdly, the input may be ungrammatical even by the rules of the grammar of the speaker but be the expected output of the speaker's sentence generating device.This case has not been much discussed, but it is important because in certain environments speakers (and to some extent unskilled writers) regularly produce ungrammmatical output in preference to grammatically unimpeachable alternatives.Finally, the input t~at the system receives may simply contain uncorrected errors.How important this last source of non-standard input would be in a functioning system is hard to judge and wouldThe discussion in this paper is based an on-going study of the syntactic differences between written and of spoken language funded by the National Institute of Education under grants G78-0169 and G80-0163.depend on the environment of use. Uncorrected errors are, in our experience, reasonably rare in fluent speech but they are more common in unskilled writing.These errors may be typographical, a case we shall ignore in this discussion, or they may be grammatical.Of most interest to us are the cases where the error is due to a language user attempting to use a standard language construction that he/she does not natively command.In the course of this brief communication we shall discuss each of the above cases with examples, drawing on work we have done describing the differences between the syntax of vernacular speech and of standard writing (Kroch and Nindle, 1981) .Our work indicates that these differences are sizable enough to cause problems for the acquisition of writing as a skill, and they may arise'as well when natural language understanding systems come to be used by a wider public. Whether problems will indeed arise is, of course, hard to say as it depends on so many factors.The most important of these is whether natural language systems are ever used with oral, as well as typed-in, language. We do not know whether the features of speech that we will be outlining will also show up in "keyboard" language; for its special characteristics have been little studied from a linguistic point of view (for a recent attempt see Thompson 1980) . 
They will certainly occur more sporadically and at a lower incidence than they do in speech;and there may be new features of "keyboard" language that are not predictable from other language modes.We shall have little to say about how the problem of non-standard input can be best handled in a working system; for solving that problem will require more research.If we can give researchers working on natural language systems a clearer idea of what their devices are likely to have to cope with in an environment of widespread public use, our remarks will have achieved their purpose.Informal. generally spoken, English exists in a number of regional, class and ethnic varieties, each with its own grammatical peculiarities. Fortunately, the syntax of these dialects is somewhat less varied than the phonology so that we may reasonably approximate the situation by speaking of a general "non-standard vernacular (NV)", which contrasts in numerous ways with standard written English (SWE).Some of the differences between the two dialects can lead to problems for parsing and interpretation.Thus, subject-verb agreement, which is categorical in SWE, is variable in NV. In fact, in some environments subject-verb agreement is rarely indicated in NV, the most notable being sentences with dummy there subjects.Thus, the first of the sentences in (i) is the more likely in NV while, of course, only the second can occur in SWE:(I) a. There was two girls on the sofa. b. There were two girls on the sofa. Since singular number is the unmarked alternative, it occurs with both singular and plural subjects; hence only plural marking on a verb can be treated as a clear signal of number in NV. This could easily prove a problem for parsers that use number marking to help find subject-verb pairs. A further, perhaps more difficult, problem would be posed by another feature of NV, the deletion of relative clause ¢omplementizers on subject relatives.SWE does not allow sentences like those in (2); but they are the most likely form in many varieties of NV and occur quite freely in the speech of people whose speech is otherwise standard:(2) a. Anybody says it is a liar. b. There was a car used to drive by here. Here a parser that assumes that the first tensed verb following an NP that agrees with it is the main verb, will be misled.There are severe constraints on the environments in which subject relatives can appear without a complementizer, apparently to prevent hearers from "garden-pathing" on this construction, but these restrictions are not statable in a purely structural way.A final example of a NV construction which differs from what SWE allows is the use of it for expletive there, as in (3): --(3)It was somebody standing on the corner, This construction is categorical in black English, but it occurs with considerable frequency in the speech of whites as well, at least in Philadelphia, the only location on which we have data. This last example poses no problems in principle for a natural language system; it is simply a grammatical fact of NV that has to be incorporated into the grammar implemented by the natural language understanding system.There are many features like this, each trivial in itself but nonetheless a productive feature of the language.and false starts are a consistent feature of spoken language and any interpreter that -cannot handle them will fail instantly.In one count we found that 52% of the sentences in a 90 minute conversational interview contained at least one instance (Hindle, i981b). 
Fortunately, the deformation of grammaticality caused by self-correction induced disfluency is quite limited and predictable (Labov, 1966) . With a small set of editing rules, therefore, we have been able to normalize more than 95% of such disfluencies in preprocessing texts for input to a parser for spoken language that we have been constructing (Hindle, 1981b) .These rules are based on the fact that false starts in speech are phonetically signaled, often by truncation of the final syllable.Marking the truncation and other phonetic editing signals in our transcripts, we find that a simple procedure which removes the minimum number of words necessary to create a parsable sequence eliminates most ill-formedness.The spoken language contains as a normal part of its syntactic repertoire constructions like those illustrated below:(4) The problem is is that nobody understands me. (5) That's the only thing he does is fight. (6) John was the only guest who we weren't sure whether he would come. (7) Didn't have to worry about us. These are constructions that it is difficult to accomodate in a linguistically motivated syntax for obvious reasons. Sentence (4) has two tensed verbs; (5), which has been called a "portmanteau construction", has a constituent belonging simultaneously to two different sentences; (6) has a wh-movement construction with no trace (see the discussion in Kroch, 1981) ; and (7) violates the absolute grammatical requirement that English sentences have surface subjects.We do not know why these forms occur so regularly in speech, but we do know that they are extremely common. The reasons undoubtedly vary from construction to construction.Thus, (5) has the effect of removing a heavy NP from surface subject position while preserving its semantic role as subject.Since we know that heavy NPs in subject position are greatly disfavored in speech (Kroch and Hindle, 1981) , the portmanteau construction is almost certainly performing a useful function in simplifying syntactic processing or the presentation of information.Similarly, relative clauses with resumptlve pronouns, like the one in (6), seem to reflect limitations on the sentence planning mechanism used in speech.If a relative clause is begun without computing its complete syntactic analysis, as a procedure like the one in MacDonald (1980) suggests, then a resumptlve pronoun might be used to fill a gap that turned out to occur in a non-deletable position. This account explains why resumptlve pronouns do not occur in writing. They are ungrammatical and the real-tlme constraints on sentence planning that cause speech to be produced on the basis of limited look-ahead are absent. Subject deletion, illustrated in 7, is clearly a case of ellipsis induced in speech for reasons of economy llke contraction and clltlcizatlon. 
However, English grammar does not allow subjectless tensed clauses.In fact, it is this prohibition that explains the existence of expletive it in English, a feature completely absent from lang~ges with subJectless sentences.Of course, subject deletion in speech is highly constrained and its occurrence can be accommodated in a parser without completely rewriting the grammar of English, and we have done so.The point here, as with all these examples, is that close study of the syntax of speech repays the effort with improvements in coverage.The final sort of non-standard input that we will mention is the uncorrected true error.In our analysis of 40 or more hours of spoken interview material we have found true errors to be rare. They generally occur when people express complex ideas that they have not talked about before and they involve changing direction in the middle of a sentence.An example of this sort of mistake is given in (8), where the object of a prepositional phrase turns into the subject of a following clause:(8) When I was able to understand the explanation of the moves of the chessmen started to make sense to me, he became interested. Large parts of sentences with errors llke this are parsable, but the whole may not make sense. Clearly, a natural language system should be able to make whatever sense can be made out of such strings even if it cannot construct an overall structure for them. Having done as well as it can, the system must then rely on context, just as a human interlocutor would. Unlike vernacular speech, the writing of unskilled writers quite commonly displays errors.One case, which we have studied in detail is that of errors in relative clauses with "pied-plped" prepositional phrases. We often find clauses like the ones in (9), where the wrong preposition (usually in) appears at the beginning of the clause. (9) a. methods in which to communicate with other people b. rules in which people can direct their efforts Since pied-plped relatives are non-existent in NV, the simplest explanation for such examples is that they are errors due to imperfect learning of the standard language rule. More precisely, instead of moving a wh-prepositional phrase to the complementlzer position in the relative clause, unskilled writers may analyze the phrase in which as a general oblique relativizer equivalent to where, the form most commonly used in this function in informal speech.In summary, ordinary linguistic usage exhibits numerous deviations from the standard written language.The sources of these deviations are diverse and they are of varying significance for natural language processing. It is safe to say, however, that an accurate assessment of their nature, frequency and effect on interpretability is a necessary prerequisite to the development of truly robust systems.
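The agreement point made earlier (only plural marking on a verb is a reliable number cue in NV, since singular marking is the unmarked alternative) suggests how a parser's agreement check could be relaxed for vernacular input. The following fragment is a hypothetical illustration of that single point, not part of any system described in the paper; the function name and feature values are invented.

def nv_agreement_ok(subject_number, verb_number):
    # In the non-standard vernacular a plural-marked verb genuinely signals a
    # plural subject, but a singular-marked verb is compatible with either
    # number ("There was two girls on the sofa").
    if verb_number == "plural":
        return subject_number == "plural"
    return True

print(nv_agreement_ok("plural", "singular"))   # True:  "There was two girls ..."
print(nv_agreement_ok("singular", "plural"))   # False: "*The girl were ..."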
Appendix:
| null | null | null | null | {
"paperhash": [
"thompson|linguistic_analysis_of_natural_language_communication_with_computers",
"mcdonald|natural_language_production_as_a_process_of_decision-making_under_constraints"
],
"title": [
"Linguistic Analysis of Natural Language Communication With Computers",
"Natural language production as a process of decision-making under constraints"
],
"abstract": [
"Interaction with computers in natural \nlanguage requires a language that is flexible \nand suited to the task. This study of natural \ndialogue was undertaken to reveal those characteristics \nwhich can make computer English more \nnatural. Experiments were made in three modes \nof communication: face-to-face, terminal-to-terminal \nand human-to-computer, involving over \n80 subjects, over 80,000 words and over 50 \nhours. They showed some striking similarities, \nespecially in sentence length and proportion of \nwords in sentences. The three modes also share \nthe use of fragments, typical of dialogue. \nDetailed statistical analysis and comparisons \nare given. The nature and relative frequency of \nfragments, which have been classified into \ntwelve categories, is shown in all modes. Special \ncharacteristics of the face-to-face mode \nare due largely to these fragments (which \ninclude phatics employed to keep the channel of \ncommunication open). Special characteristics of \nthe computational mode include other fragments, \nnamely definitions, which are absent from other \nmodes. Inclusion of fragments in computational \ngrammar is considered a major factor in improving \ncomputer naturalness. \n \nThe majority of experiments involved a real \nlife task of loading Navy cargo ships. The \npeculiarities of face-to-face mode were similar \nin this task to results of earlier experiments \ninvolving another task. It was found that in \ntask oriented situations the syntax of interactions \nis influenced in all modes by this context \nin the direction of simplification, resulting in \nshort sentences (about 7 words long). Users \nseek to maximize efficiency In solving the problem. \nWhen given a chance, in the computational \nmode, to utilize special devices facilitating \nthe solution of the problem, they all resort to \nthem. \n \nAnalyses of the special characteristics of \nthe computational mode, including the analysis \nof the subjects\" errors, provide guidance for \nthe improvement of the habitability of such systems. \nThe availability of the REL System, a \nhigh performance natural language system, made \nthe experiments possible and meaningful. The \nindicated improvements in habitability are now \nbeing embodied in the POL (Problem Oriented \nLanguage) System, a successor to REL.",
"1,102,701. Locating conductors. TATEISI ELECTRONICS CO. June 16, 1965 [June 24, 1964], No. 25467/65. Heading G1N. To compensate for the effect of supply voltage fluctuations on an electromagnetic detector, the output amplifier has a D. C. reference voltage varying with the mains supply. As shown the sensing head comprises a primary 2 and opposed secondaries 3, 4 (for detecting a conductor 9). The output is applied through an amplifier 14 to a common emitter trigger comprising transistors 17, 18 to operate a switching circuit 23. The switching circuit and transistors are energized from constant voltage supplies, but the emitter \"reference\" bias is derived from the current through a resistor 28 in series with a Zener diode 29 and hence varies with the A. C. supply to the sensing head."
],
"authors": [
{
"name": [
"B. H. Thompson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"David D. McDonald"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null
],
"s2_corpus_id": [
"1010309",
"45464479"
],
"intents": [
[
"background"
],
[]
],
"isInfluential": [
false,
false
]
} | - Problem: The paper addresses the need for natural language understanding systems to incorporate features of spoken vernacular language to effectively interpret non-standard input, including non-standard grammatical rules, hesitations, false starts, systematic errors, and uncorrected true errors.
- Solution: The paper proposes that by understanding and incorporating these features of spoken vernacular language, natural language understanding systems can better handle non-standard input and improve their ability to interpret spoken communication. | 512 | 0.015625 | null | null | null | null | null | null | null | null |
56aa9a578717ba03a6a5fa2b893ca04cad251cc3 | 31006279 | null | Our Double Anniversary | In June of 1952, ten years before the founding of the Association, the first meeting ever held on computational linguistics took place. This meeting, the succeeding ten years, and the first year of the Association are discussed. Some thoughts are offered as to what the future may bring. | {
"name": [
"Yngve, Victor H."
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 0 | 2 | null | When the suggestion came from Don Walker to celebrate our twentieth anniversary by a panel discussion I responded with enthusiasm at the opportunlty for us all to reminisce. Much has happened in those twenty years to look back on, and there have been many changes: Not many here will remember that founding meeting.As our thoughts go back to the beginnings it must also be with a note of sadness, for some of our most illustrious early members can no longer be counted among the living. Not many of you will remember either that our meeting here today marks another anniversary of signal importance for this Association. Thirty years ago the first organized conference ever to be held in the field of computational linguistics took place. The coincidence of the dates is remarkable. This conference is on June [16] [17] [18] 1982 , that one was on June 17-20, 1952, overlapping two of our three dates. That meeting was the M.I.T. Conference on Mechanical Translation. It was an international meeting organized by ¥. Bar-Hillsl and held at the M.I.L faculty club. If our association was born twenty years ago, this was the moment of its conception, exactly thirty years ago. I will try to recall that meeting for you, as best I can, for I propose that we celebrate that anniversary as well.For that very first meeting Bar-Hillel had brought together eighteen interested people from both coasts and from En~In~d.The first session was an evening session open to the public. It consisted of five short semi-popular talks. The real business of the meeting took place the next three days in closed sessions in a pleasant room overlooking the Charles River. We sat around a kind of rectangular round-table, listened to fifteen prepared papers or presentations, and discus-sed them with a no-holds-barred give-and-take catalyzed by the intense, open, and candidly outspoken personality of Bar-Hillel. He was the only person I ever knew who could argue with you, shouting excitedly at the top of his lungs until your back was literally against the wall, and always with that angelic smile on his face and you couldn't help llklng him through it all.The stenotype transcript of the dlsousslon at that first meetlng makes interesting reading even today. The participants grappled in a preliminary but often insightful way with difficult issues many of which are still with u~ The ten years between the first conference and the founding of the Association were marked by many newsworthy events and considerable technical progress. A number of individuals and groups entered the field, both here and abroad, and an adequate level of support materialized, mostly from government agencies.This important contribution to progress in our field should be a matter of pride to the agencies involved.It was an essential ingredient in the mix of efforts that have put us where we are today.Progress in that first ten years can be estimated by considering that up to the time of the founding of the Association the journal ~~publlshed 52 articles, 187 abstracts of the llterature, and ran to 532 pages.To review all of that research adequately would be a large task, and one that I will not undertake here. 
But I should like to say that it includes a number of cases where computer techniques have played an essential role in linguistic research. Just one example is the work on the depth hypothesis during the summer of 1959, which owes everything to the heuristic advantages of computer modeling in linguistics. Those linguists who still scorn or ignore computational linguistics should consider carefully those many examples of the efficacy of computer methods in their discipline.

Toward the end of those ten years the need for a professional society became clear. We did keep in touch by phone and letter, and ad hoc committees had been formed for various purposes. But most of all we needed a formal organization to bring a degree of order into the process of planning meetings. We could make plans through our informal contacts, but there was always the problem that new groups or existing organizations would go ahead with plans of their own for meetings too soon before or after our own. There were also requests from sponsoring agencies for symposia reviewing progress and encouraging cooperation between the growing number of federally supported projects. We wanted regular meetings but we resisted the idea of having too many.

As an example of the situation we faced, I received a letter early in 1959 from the Association for Computing Machinery, who were planning a National Conference to be held at M.I.T. September 1-3, 1959. They asked me if I thought that people connected with mechanical translation would like to have a session at the meeting or meet concurrently. I said I didn't know, but agreed to write to some people in the field about it. I did write, offering to set up a session or a separate meeting if others wanted me to do it, but expressing the thought that there were very few of us doing research in the field and that there now were a number of organizations that would like to include mechanical translation papers in their programs to build interest and attendance. It was a hot topic at the time. We did not take up the ACM on their kind offer. Had we done so, we might today be a Special Interest Group of the ACM, and that would have hindered our close ties to linguistics.

In any event, the people at UCLA organized a National Symposium on Machine Translation, which took place on February 2-5, 1960, just five months after the date of the ACM meeting, and five months after that, on July 18-22, 1960, a meeting of federally sponsored machine translation workers, organized by Harry Josselson and supported by NSF and ONR, was held at the Princeton Inn, Princeton, New Jersey. The next year, on April 4-7, 1961, a similar conference was held at Georgetown University, and just five months after that, on September Oettinger, and Sydney M. Lamb were members of the Nominating Committee. Our announced purpose was to encourage high professional standards by sponsoring meetings, publication, and other exchange of information. It was to provide a means of doing together what individuals cannot do alone. Many of us had hoped for a truly international association.
We felt this would be particularly appropriate for an organization involved in trying to improve the means for international communication through mechanical translation. But the cost of travel, travel restrictions from some countries, and various other practical problems stood in the way. We became an international but predominantly American association. We decided from the beginning to meet in alternate years in conjunction with a major computer conference and a major linguistics conference.

My year of tenure as President was uneventful, or so it seems. It is difficult to extract one year of memories twenty years ago. I do remember a trip to Denver to see about arrangements for our first annual meeting at the Denver Hilton, to take place August 25 and 26, 1963, the two days immediately preceding the ACM National Conference. The local arrangements people for that meeting were most helpful. The program was put together by Harry Josselson. There were thirty-four papers covering a wide variety of topics including syntactic analysis, semantics, particulars of languages, theoretical linguistics, research procedures, and research techniques. Abstracts for the thirty-four papers were published in Mechanical Translation, Vol. 7, No. 2, and a group photograph of some of the delegates attending appeared in Vol. 8, No. 1. Looking at this photograph and those taken at earlier conferences and published in earlier issues invokes considerable nostalgia for those days.

I do remember my presidential address, for it stressed some matters that I thought were particularly important for the future. These thoughts were also embodied in a longer paper read to the American Philosophical Society three months later, in November 1963, and published the next year by that organization. I should like to quote a few sentences for they are particularly appropriate at this point:

"A new field of research has grown up which revolves about languages, computers, and symbolic processes. This sometimes is called computational linguistics, mechanical linguistics, information processing, symbol manipulation, and so on. None of the names are really adequate. The implications of this research for the future are far-reaching. Imagine what it would mean if we had computer programs that could actually understand English. Besides the obvious practical implications, the implications for our understanding of language are most exciting. This research promises to give us new insights into the way in which languages convey information, the way in which people understand English, the nature of thought processes, the nature of our theories, ideas, and prejudices, and eventually a deeper understanding of ourselves. Perhaps one of the last frontiers of man's understanding of his environment is his understanding of man and his mental processes.

"This new field touches, with various degrees of overlap and interaction, the already well-established diverse fields of linguistics, psychology, logic, philosophy, information theory, circuit theory, and computer design. The interaction with linguistics has already produced several small revolutions in methodology, point of view, insight into language, and standards of rigor and exactness. It appears that before we are done, linguistics will be completely revolutionized."

This quotation is particularly apt because I still believe that before we are done linguistics will be completely revolutionized. Let me explain.
First, the difficulties in mechanizing translation had already at that early date called attention to fundamental inadequacies in linguistic theory, traditional or transformational, it makes no difference. Second, the depth hypothesis and the problems raised in trying to square it with current linguistic theory threw further doubt on the scientific integrity of linguistics. And third, the depth hypothesis also provided an important clue as to how the inadequacies in linguistic theory might eventually be overcome. I have spent the last two decades or so following this lead and trying to find a more satisfactory foundation for linguistics. The following is a brief progress report to the parent body, as it were. A recent written report may be found in the Janua Linguarum Series Major volume 97, edited by Florian Coulmas.

Modern scientific linguistics, since its beginning a century and a half ago, has been characterized by three central goals: (1) that it study language, (2) that it be scientific, and (3) that it seek explanations in terms of people. It turns out that these goals are contradictory and mutually incompatible, and this is the underlying reason for the most serious inadequacies in linguistic theory. Linguistics, and that includes computational linguistics, is faced with two mutually exclusive alternatives. We can either accept the first goal and study language by the methods of grammar, or we can accept the second and third goals and seek explanations of communicative phenomena in terms of people by the methods of science. We cannot continue with business as usual and try to have it both ways. Basically this is because science studies real objects given in advance whereas grammar studies objects that are only created by a point of view, as Saussure realized. Their study rests on a special assumption that places grammar outside of science. To try to have it both ways also leads to the fallacies of the psychological and social reality of grammar.

The full implications of this fork in the road that linguistics faces are just now sinking in. Only the second alternative is viable, science rather than grammar. This means we will have to give up the two thousand year grammatical tradition at the core of linguistic thought and reconstruct the discipline on well-known scientific principles instead. This will open up vast opportunities for research to uncover that essential and unique part of human nature, how people communicate. We may then finally be able to do all those things we have been trying so hard to do. In this necessary reconstruction I foresee that computational linguistics is destined to play an essential role.

Main paper:
I. The early years:
When the suggestion came from Don Walker to celebrate our twentieth anniversary by a panel discussion I responded with enthusiasm at the opportunity for us all to reminisce. Much has happened in those twenty years to look back on, and there have been many changes. Not many here will remember that founding meeting. As our thoughts go back to the beginnings it must also be with a note of sadness, for some of our most illustrious early members can no longer be counted among the living.

Not many of you will remember either that our meeting here today marks another anniversary of signal importance for this Association. Thirty years ago the first organized conference ever to be held in the field of computational linguistics took place. The coincidence of the dates is remarkable. This conference is on June 16-18, 1982; that one was on June 17-20, 1952, overlapping two of our three dates. That meeting was the M.I.T. Conference on Mechanical Translation. It was an international meeting organized by Y. Bar-Hillel and held at the M.I.T. faculty club. If our association was born twenty years ago, this was the moment of its conception, exactly thirty years ago. I will try to recall that meeting for you, as best I can, for I propose that we celebrate that anniversary as well.

For that very first meeting Bar-Hillel had brought together eighteen interested people from both coasts and from England. The first session was an evening session open to the public. It consisted of five short semi-popular talks. The real business of the meeting took place the next three days in closed sessions in a pleasant room overlooking the Charles River. We sat around a kind of rectangular round-table, listened to fifteen prepared papers or presentations, and discussed them with a no-holds-barred give-and-take catalyzed by the intense, open, and candidly outspoken personality of Bar-Hillel. He was the only person I ever knew who could argue with you, shouting excitedly at the top of his lungs until your back was literally against the wall, and always with that angelic smile on his face and you couldn't help liking him through it all. The stenotype transcript of the discussion at that first meeting makes interesting reading even today. The participants grappled in a preliminary but often insightful way with difficult issues many of which are still with us.

The ten years between the first conference and the founding of the Association were marked by many newsworthy events and considerable technical progress. A number of individuals and groups entered the field, both here and abroad, and an adequate level of support materialized, mostly from government agencies. This important contribution to progress in our field should be a matter of pride to the agencies involved. It was an essential ingredient in the mix of efforts that have put us where we are today. Progress in that first ten years can be estimated by considering that up to the time of the founding of the Association the journal Mechanical Translation published 52 articles, 187 abstracts of the literature, and ran to 532 pages. To review all of that research adequately would be a large task, and one that I will not undertake here. But I should like to say that it includes a number of cases where computer techniques have played an essential role in linguistic research. Just one example is the work on the depth hypothesis during the summer of 1959, which owes everything to the heuristic advantages of computer modeling in linguistics.
Those linguists who still scorn or ignore computational linguistics should consider carefully those many examples of the efficacy of computer methods in their discipline.

Toward the end of those ten years the need for a professional society became clear. We did keep in touch by phone and letter, and ad hoc committees had been formed for various purposes. But most of all we needed a formal organization to bring a degree of order into the process of planning meetings. We could make plans through our informal contacts, but there was always the problem that new groups or existing organizations would go ahead with plans of their own for meetings too soon before or after our own. There were also requests from sponsoring agencies for symposia reviewing progress and encouraging cooperation between the growing number of federally supported projects. We wanted regular meetings but we resisted the idea of having too many.

As an example of the situation we faced, I received a letter early in 1959 from the Association for Computing Machinery, who were planning a National Conference to be held at M.I.T. September 1-3, 1959. They asked me if I thought that people connected with mechanical translation would like to have a session at the meeting or meet concurrently. I said I didn't know, but agreed to write to some people in the field about it. I did write, offering to set up a session or a separate meeting if others wanted me to do it, but expressing the thought that there were very few of us doing research in the field and that there now were a number of organizations that would like to include mechanical translation papers in their programs to build interest and attendance. It was a hot topic at the time. We did not take up the ACM on their kind offer. Had we done so, we might today be a Special Interest Group of the ACM, and that would have hindered our close ties to linguistics.

In any event, the people at UCLA organized a National Symposium on Machine Translation, which took place on February 2-5, 1960, just five months after the date of the ACM meeting, and five months after that, on July 18-22, 1960, a meeting of federally sponsored machine translation workers, organized by Harry Josselson and supported by NSF and ONR, was held at the Princeton Inn, Princeton, New Jersey. The next year, on April 4-7, 1961, a similar conference was held at Georgetown University, and just five months after that, on September Oettinger, and Sydney M. Lamb were members of the Nominating Committee. Our announced purpose was to encourage high professional standards by sponsoring meetings, publication, and other exchange of information. It was to provide a means of doing together what individuals cannot do alone.

Many of us had hoped for a truly international association. We felt this would be particularly appropriate for an organization involved in trying to improve the means for international communication through mechanical translation. But the cost of travel, travel restrictions from some countries, and various other practical problems stood in the way. We became an international but predominantly American association. We decided from the beginning to meet in alternate years in conjunction with a major computer conference and a major linguistics conference.

My year of tenure as President was uneventful, or so it seems. It is difficult to extract one year of memories twenty years ago.
I do remember a trip to Denver to see about arrangements for our first annual meeting at the Denver Hilton, to take place August 25 and 26, 1963, the two days immediately preceding the ACM National Conference. The local arrangements people for that meeting were most helpful. The program was put together by Harry Josselson. There were thirty-four papers covering a wide variety of topics including syntactic analysis, semantics, particulars of languages, theoretical linguistics, research procedures, and research techniques. Abstracts for the thirty-four papers were published in Mechanical Translation, Vol. 7, No. 2, and a group photograph of some of the delegates attending appeared in Vol. 8, No. 1. Looking at this photograph and those taken at earlier conferences and published in earlier issues invokes considerable nostalgia for those days.

I do remember my presidential address, for it stressed some matters that I thought were particularly important for the future. These thoughts were also embodied in a longer paper read to the American Philosophical Society three months later, in November 1963, and published the next year by that organization. I should like to quote a few sentences for they are particularly appropriate at this point:

"A new field of research has grown up which revolves about languages, computers, and symbolic processes. This sometimes is called computational linguistics, mechanical linguistics, information processing, symbol manipulation, and so on. None of the names are really adequate. The implications of this research for the future are far-reaching. Imagine what it would mean if we had computer programs that could actually understand English. Besides the obvious practical implications, the implications for our understanding of language are most exciting. This research promises to give us new insights into the way in which languages convey information, the way in which people understand English, the nature of thought processes, the nature of our theories, ideas, and prejudices, and eventually a deeper understanding of ourselves. Perhaps one of the last frontiers of man's understanding of his environment is his understanding of man and his mental processes.

"This new field touches, with various degrees of overlap and interaction, the already well-established diverse fields of linguistics, psychology, logic, philosophy, information theory, circuit theory, and computer design. The interaction with linguistics has already produced several small revolutions in methodology, point of view, insight into language, and standards of rigor and exactness. It appears that before we are done, linguistics will be completely revolutionized."

This quotation is particularly apt because I still believe that before we are done linguistics will be completely revolutionized. Let me explain. First, the difficulties in mechanizing translation had already at that early date called attention to fundamental inadequacies in linguistic theory, traditional or transformational, it makes no difference. Second, the depth hypothesis and the problems raised in trying to square it with current linguistic theory threw further doubt on the scientific integrity of linguistics. And third, the depth hypothesis also provided an important clue as to how the inadequacies in linguistic theory might eventually be overcome. I have spent the last two decades or so following this lead and trying to find a more satisfactory foundation for linguistics.
The following is a brief progress report to the parent body, as it were. A recent written report may be found in the Janua Linguarum Series Major volume 97, edited by Florian Coulmas.

Modern scientific linguistics, since its beginning a century and a half ago, has been characterized by three central goals: (1) that it study language, (2) that it be scientific, and (3) that it seek explanations in terms of people. It turns out that these goals are contradictory and mutually incompatible, and this is the underlying reason for the most serious inadequacies in linguistic theory. Linguistics, and that includes computational linguistics, is faced with two mutually exclusive alternatives. We can either accept the first goal and study language by the methods of grammar, or we can accept the second and third goals and seek explanations of communicative phenomena in terms of people by the methods of science. We cannot continue with business as usual and try to have it both ways. Basically this is because science studies real objects given in advance whereas grammar studies objects that are only created by a point of view, as Saussure realized. Their study rests on a special assumption that places grammar outside of science. To try to have it both ways also leads to the fallacies of the psychological and social reality of grammar.

The full implications of this fork in the road that linguistics faces are just now sinking in. Only the second alternative is viable, science rather than grammar. This means we will have to give up the two thousand year grammatical tradition at the core of linguistic thought and reconstruct the discipline on well-known scientific principles instead. This will open up vast opportunities for research to uncover that essential and unique part of human nature, how people communicate. We may then finally be able to do all those things we have been trying so hard to do. In this necessary reconstruction I foresee that computational linguistics is destined to play an essential role.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 512 | 0.003906 | null | null | null | null | null | null | null | null |
7f5fa55a6b3910d0ea87e51410c1a20d72df5e43 | 6216649 | null | Ill-Formed and Non-Standard Language Problems | Prospects look good for making real improvements in Natural Language Processing systems with regard to dealing with unconventional inputs in a practical way. Research which is expected to have an influence on this progress as well as some predictions about accomplishments in both the short and long term are discussed. | {
"name": [
"Kwasny, Stan"
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 13 | 2 | null | Natural Language Understanding systems which permit language in expected forms in anticipated environments having a well-defined semantics is in many ways a solved problem with today's technology.Unfortunately, few interesting situations in which Natural Language is useful live up to this description.Even a modicum of machine intelligence is not pcsslble, we believe, without continuing the pursuit for more sophisticated models which deal with such problems and which degrade gracefully (see Hayes and Reddy, 1979) .Language as spoken (or typed) breaks the "rules".Every study substantiates this fact. Malhotra (1975) discovered this in his studies of live subjects in designing a system to support decision-making activities.An extensive investigation by Thompson (1980) provides further evidence that providing a grammar of "standard English"does not go far enough in meeting the prospective needs of the user.Studies by Fromkin an~ her co-workers (1980), likewise, provide new insights into the range of errors that can occur in the use of language in various situations. Studies of this sort are essential in identifying the nature of such non-standard usages.But more than merely anticipating user inputs is required. Grammaticality is a continuum phenomenon with many dimensions.So is intelligibility.In hearing language used in a strange way, we often pass off the variation as dialectic, or we might unconsciously correct an errorful utterance.Occasionally, we might not understand or even misunderstand.What are the rules (zetarules, etc.) under which we operate in doing this? Can introspection be trusted to provide the proper ~erspecCives?The results of at least one investigator argue against the use of intuitions in discovering these rules (Spencer, 1973) .Computational linguists must continue to conduct studies and consider the results of studies conducted by others.exist which may give insights on the problem.We present some of these, not to pretend to exhaustively summarize them, but to hopefully stimulate interest among researchers to pursue one or more of these views of what is needed.Certain telegraphic forms of language occur in situations where two or more speakers of different languages must communicate.A pidgin form of language develops which borrows features from each of the languages.Characteristically, it has limited vocabulary and lacks several grammatical devices (like number and gender, for example) and exhibits a reduced number of redundant features. This phenomenon can similarly he observed in some styles of man-machine dialogue.Once the user achieves some success in conversing with the machine, whether the conversation is being conducted in Natural Language or not, there is a tendency to continue to use those forms and words which were previously handled correctly. The result is a type of pidginization between the machine dialect and the user dialect which exhibits pidgin-like characteristics: limited vocabulary, limited use of some grammatical devices, etc.It is therefore reasonable to study these forms of language and to attempt to accomodate them in some natural way within our language models. Woods (1977) points out that the use of Natural Language: "... does not preclude the introduction of abbreviations and telegraphic shorthands for complex or high frequency concepts --the ability of natural English to accommodate such abbreviations is one of its strengths." 
(p.18) Specialized sublanguages can often be identified which enhance the quality of the communication and prove to be quite convenient especially to frequent users.

Conjunction is an extremely common and yet poorly understood phenomenon. The wide variety of ways in which sentence fragments may be joined argues against any approach which attempts to account for conjunction within the same set of rules used in processing other sentences. Also, constituents being joined are often fragments, rather than complete sentences, and, therefore, any serious attempt to address the problem of conjunction must necessarily investigate ellipsis as well. Since conjunction-handling involves ellipsis-handling, techniques which treat non-standard linguistic forms must explicate both.

What approaches work well in such situations? Once a non-standard language form has been identified, the rules of the language processing component could simply be expanded to accommodate that new form. But that approach has limitations and misses the general phenomenon in most cases. DeJong (1979) demonstrated that wire service stories could be "skimmed" for prescribed concepts without much regard to grammaticality or acceptability issues. Instead, as long as coherency existed among the individual concepts, the overall content of the story could be summarized. The whole problem of addressing what to do with non-standard inputs was finessed because of the context.

Techniques based on meta-rules have been explored by various researchers. Kwasny (1980) investigated specialized techniques for dealing with co-occurrence violations, ellipsis, and conjunction within an ATN grammar. Sondheimer and Weischedel (1981) have generalized and refined this approach by making the meta-rules more explicit and by designing strategies which manipulate the rules of the grammar using meta-rules. Other systems have taken the approach that the user should play a major role in exercising choices about the interpretations proposed by the system. With such feedback to the user, no time-consuming actions are performed without his approval. This approach works well in database retrieval tasks.

In the short term, we must look to what we understand and know about the language phenomena and apply those techniques that appear promising. Non-standard language forms appear as errors in the expected processing paths. One of the functions of a style-checking program (for example the EPISTLE system by Miller et al., 1981) is to detect and, in some cases, correct certain types of errors made by the author of a document. Since such programs are expected to become more of a necessary part of any author support system, a great deal of research can be expected to be directed at that problem. A great deal of research which deals with errors in language inputs comes from attempts to process continuous speech (see, for example, Bates, 1976). The techniques associated with non-left-to-right processing strategies should prove useful in narrowing the number of legal alternatives to be attempted when identifying and correcting some types of error. It is quite conceivable that an approach to this problem that parallels the work on speech understanding would be very fruitful.
Note that this does not involve inventing new methods, but rather borrows from related studies. The primary impediment, at the moment, to this approach, as with some of the other approaches mentioned, is the time involved in considering viable alternatives. As these problems are reduced over the next few years, I feel that we should see Natural Language systems with greatly improved communication abilities.

In the long term, some form of language learning capability will be critical. Both rules and meta-rules will need to be modifiable. The system behavior will need to improve and adapt to the user over time. User models of style and preferred forms as well as common mistakes will be developed as a necessary part of such systems. As speed increases, more opportunity will be available for creative architectures such as was seen in the speech projects, but which still respond within a reasonable time frame. Finally, formal studies of user responses will need to be conducted in an ongoing fashion to assure that the systems we build conform to user needs.

Bates, M., "Syntax in Automatic Speech Understanding," American Journal of Computational Linguistics, Microfiche 45, 1976. DeJong, G.F., "Skimming Stories in Real Time: An Experiment in Integrated Understanding," Technical Report 158, Yale University, Computer Science Department, 1979. Fromkin, V.A., ed., Errors in Linguistic Performance: Slips of the Tongue, Ear, Pen, and Hand, Academic Press, New York, 1980. Hayes, P.J., and R. Reddy, "An Anatomy of Graceful Interaction in Spoken and Written Man-Machine Communication," Technical Report, Carnegie-Mellon University, August, 1979.

Main paper:
developing:
Building Natural Language Understanding systems which permit language in expected forms in anticipated environments having a well-defined semantics is in many ways a solved problem with today's technology. Unfortunately, few interesting situations in which Natural Language is useful live up to this description. Even a modicum of machine intelligence is not possible, we believe, without continuing the pursuit for more sophisticated models which deal with such problems and which degrade gracefully (see Hayes and Reddy, 1979).

Language as spoken (or typed) breaks the "rules". Every study substantiates this fact. Malhotra (1975) discovered this in his studies of live subjects in designing a system to support decision-making activities. An extensive investigation by Thompson (1980) provides further evidence that providing a grammar of "standard English" does not go far enough in meeting the prospective needs of the user. Studies by Fromkin and her co-workers (1980), likewise, provide new insights into the range of errors that can occur in the use of language in various situations. Studies of this sort are essential in identifying the nature of such non-standard usages.

But more than merely anticipating user inputs is required. Grammaticality is a continuum phenomenon with many dimensions. So is intelligibility. In hearing language used in a strange way, we often pass off the variation as dialectic, or we might unconsciously correct an errorful utterance. Occasionally, we might not understand or even misunderstand. What are the rules (meta-rules, etc.) under which we operate in doing this? Can introspection be trusted to provide the proper perspectives? The results of at least one investigator argue against the use of intuitions in discovering these rules (Spencer, 1973). Computational linguists must continue to conduct studies and consider the results of studies conducted by others.

Several approaches exist which may give insights on the problem. We present some of these, not to pretend to exhaustively summarize them, but to hopefully stimulate interest among researchers to pursue one or more of these views of what is needed.

Certain telegraphic forms of language occur in situations where two or more speakers of different languages must communicate. A pidgin form of language develops which borrows features from each of the languages. Characteristically, it has limited vocabulary and lacks several grammatical devices (like number and gender, for example) and exhibits a reduced number of redundant features. This phenomenon can similarly be observed in some styles of man-machine dialogue. Once the user achieves some success in conversing with the machine, whether the conversation is being conducted in Natural Language or not, there is a tendency to continue to use those forms and words which were previously handled correctly. The result is a type of pidginization between the machine dialect and the user dialect which exhibits pidgin-like characteristics: limited vocabulary, limited use of some grammatical devices, etc. It is therefore reasonable to study these forms of language and to attempt to accommodate them in some natural way within our language models. Woods (1977) points out that the use of Natural Language: "... does not preclude the introduction of abbreviations and telegraphic shorthands for complex or high frequency concepts -- the ability of natural English to accommodate such abbreviations is one of its strengths."
(p.18) Specialized sublanguages can often be identified which enhance the quality of the communication and prove to be quite convenient especially to frequent users.

Conjunction is an extremely common and yet poorly understood phenomenon. The wide variety of ways in which sentence fragments may be joined argues against any approach which attempts to account for conjunction within the same set of rules used in processing other sentences. Also, constituents being joined are often fragments, rather than complete sentences, and, therefore, any serious attempt to address the problem of conjunction must necessarily investigate ellipsis as well. Since conjunction-handling involves ellipsis-handling, techniques which treat non-standard linguistic forms must explicate both.

What approaches work well in such situations? Once a non-standard language form has been identified, the rules of the language processing component could simply be expanded to accommodate that new form. But that approach has limitations and misses the general phenomenon in most cases. DeJong (1979) demonstrated that wire service stories could be "skimmed" for prescribed concepts without much regard to grammaticality or acceptability issues. Instead, as long as coherency existed among the individual concepts, the overall content of the story could be summarized. The whole problem of addressing what to do with non-standard inputs was finessed because of the context.

Techniques based on meta-rules have been explored by various researchers. Kwasny (1980) investigated specialized techniques for dealing with co-occurrence violations, ellipsis, and conjunction within an ATN grammar. Sondheimer and Weischedel (1981) have generalized and refined this approach by making the meta-rules more explicit and by designing strategies which manipulate the rules of the grammar using meta-rules. Other systems have taken the approach that the user should play a major role in exercising choices about the interpretations proposed by the system. With such feedback to the user, no time-consuming actions are performed without his approval. This approach works well in database retrieval tasks.

In the short term, we must look to what we understand and know about the language phenomena and apply those techniques that appear promising. Non-standard language forms appear as errors in the expected processing paths. One of the functions of a style-checking program (for example the EPISTLE system by Miller et al., 1981) is to detect and, in some cases, correct certain types of errors made by the author of a document. Since such programs are expected to become more of a necessary part of any author support system, a great deal of research can be expected to be directed at that problem. A great deal of research which deals with errors in language inputs comes from attempts to process continuous speech (see, for example, Bates, 1976). The techniques associated with non-left-to-right processing strategies should prove useful in narrowing the number of legal alternatives to be attempted when identifying and correcting some types of error. It is quite conceivable that an approach to this problem that parallels the work on speech understanding would be very fruitful.
Note that this does not involve inventing new methods, but rather borrows from related studies. The primary impediment, at the moment, to this approach, as with some of the other approaches mentioned, is the time involved in considering viable alternatives. As these problems are reduced over the next few years, I feel that we should see Natural Language systems with greatly improved communication abilities.

In the long term, some form of language learning capability will be critical. Both rules and meta-rules will need to be modifiable. The system behavior will need to improve and adapt to the user over time. User models of style and preferred forms as well as common mistakes will be developed as a necessary part of such systems. As speed increases, more opportunity will be available for creative architectures such as was seen in the speech projects, but which still respond within a reasonable time frame. Finally, formal studies of user responses will need to be conducted in an ongoing fashion to assure that the systems we build conform to user needs.

Bates, M., "Syntax in Automatic Speech Understanding," American Journal of Computational Linguistics, Microfiche 45, 1976. DeJong, G.F., "Skimming Stories in Real Time: An Experiment in Integrated Understanding," Technical Report 158, Yale University, Computer Science Department, 1979. Fromkin, V.A., ed., Errors in Linguistic Performance: Slips of the Tongue, Ear, Pen, and Hand, Academic Press, New York, 1980. Hayes, P.J., and R. Reddy, "An Anatomy of Graceful Interaction in Spoken and Written Man-Machine Communication," Technical Report, Carnegie-Mellon University, August, 1979.
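The relaxation idea referred to above (Kwasny, 1980; Sondheimer and Weischedel, 1981) can be illustrated with a small sketch. It is not the ATN machinery or the meta-rule formalism of those papers, just a toy analyser with an invented lexicon: a co-occurrence test that would normally block an analysis is allowed to fail at a cost, and the resulting analysis records which tests had to be relaxed.

```python
# An illustrative sketch of relaxation (not the ATN machinery of Kwasny, 1980,
# nor the meta-rule formalism of Sondheimer and Weischedel, 1981): a relaxable
# co-occurrence test fails at a cost instead of blocking the analysis.
# The tiny lexicon and the example inputs are invented for illustration.

LEXICON = {
    "the":   {"cat": "det"},
    "user":  {"cat": "noun", "number": "singular"},
    "users": {"cat": "noun", "number": "plural"},
    "types": {"cat": "verb", "number": "singular"},
    "type":  {"cat": "verb", "number": "plural"},
}

def analyse(words):
    """Analyse a 'det noun verb' string, relaxing agreement when it fails."""
    _, noun, verb = (LEXICON[w] for w in words)
    relaxed = []
    if noun["number"] != verb["number"]:       # relaxable co-occurrence test
        relaxed.append("subject-verb agreement")
    return {"subject": words[1], "verb": words[2], "relaxed": relaxed}

print(analyse(["the", "user", "types"]))       # well-formed: nothing relaxed
print(analyse(["the", "users", "types"]))      # ill-formed, but analysed with one relaxed test
```

In a fuller system the number of relaxed tests would serve as a penalty for ranking competing analyses, so that a well-formed reading is always preferred when one exists.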
Appendix:
| null | null | null | null | {
"paperhash": [
"miller|text-critiquing_with_the_epistle_system:_an_author's_aid_to_better_syntax",
"kwasny|relaxation_techniques_for_parsing_grammatically_ill-formed_input_in_natural_language_understanding_systems",
"thompson|linguistic_analysis_of_natural_language_communication_with_computers",
"woods|a_personal_view_of_natural_language_understanding",
"kwasny|treatment_of_ungrammatical_and_extra-grammatical_phenomena_in_natural_language_understanding_systems",
"dejong|skimming_stories_in_real_time:_an_experiment_in_integrated_understanding.",
"hayes|an_anatomy_of_graceful_interaction_in_spoken_and_written_man-machine_communication",
"malhotra|design_criteria_for_a_knowledge-based_english_language_system_for_management_:_an_experimental_analysis"
],
"title": [
"Text-critiquing with the EPISTLE system: an author's aid to better syntax",
"Relaxation Techniques for Parsing Grammatically Ill-Formed Input in Natural Language Understanding Systems",
"Linguistic Analysis of Natural Language Communication With Computers",
"A personal view of natural language understanding",
"Treatment of ungrammatical and extra-grammatical phenomena in natural language understanding systems",
"Skimming stories in real time: an experiment in integrated understanding.",
"An anatomy of graceful interaction in spoken and written man-machine communication",
"Design criteria for a knowledge-based English language system for management : an experimental analysis"
],
"abstract": [
"The experimental EPISTLE system is ultimately intended to provide office workers with intelligent applications for the processing of natural language text, particularly business correspondence. A variety of possible critiques of textual material are identified in this paper, but the discussion focuses on the system's capability to detect several classes of grammatical errors, such as disagreement in number between the subject and the verb. The system's error-detection performance relies critically on its parsing component which determines the syntactic structure of each sentence and the grammatical functions fulfilled by various phrases. Details of the system's operations are provided, and some of the future critiquing objectives are outlined.",
"This paper investigates several language phenomena either considered deviant by linguistic standards or insufficiently addressed by existing approaches. These include co-occurrence violations, some forms of ellipsis and extraneous forms, and conjunction. Relaxation techniques for their treatment in Natural Language Understanding Systems are discussed. These techniques, developed within the Augmented Transition Network (ATN) model, are shown to be adequate to handle many of these cases.",
"Interaction with computers in natural \nlanguage requires a language that is flexible \nand suited to the task. This study of natural \ndialogue was undertaken to reveal those characteristics \nwhich can make computer English more \nnatural. Experiments were made in three modes \nof communication: face-to-face, terminal-to-terminal \nand human-to-computer, involving over \n80 subjects, over 80,000 words and over 50 \nhours. They showed some striking similarities, \nespecially in sentence length and proportion of \nwords in sentences. The three modes also share \nthe use of fragments, typical of dialogue. \nDetailed statistical analysis and comparisons \nare given. The nature and relative frequency of \nfragments, which have been classified into \ntwelve categories, is shown in all modes. Special \ncharacteristics of the face-to-face mode \nare due largely to these fragments (which \ninclude phatics employed to keep the channel of \ncommunication open). Special characteristics of \nthe computational mode include other fragments, \nnamely definitions, which are absent from other \nmodes. Inclusion of fragments in computational \ngrammar is considered a major factor in improving \ncomputer naturalness. \n \nThe majority of experiments involved a real \nlife task of loading Navy cargo ships. The \npeculiarities of face-to-face mode were similar \nin this task to results of earlier experiments \ninvolving another task. It was found that in \ntask oriented situations the syntax of interactions \nis influenced in all modes by this context \nin the direction of simplification, resulting in \nshort sentences (about 7 words long). Users \nseek to maximize efficiency In solving the problem. \nWhen given a chance, in the computational \nmode, to utilize special devices facilitating \nthe solution of the problem, they all resort to \nthem. \n \nAnalyses of the special characteristics of \nthe computational mode, including the analysis \nof the subjects\" errors, provide guidance for \nthe improvement of the habitability of such systems. \nThe availability of the REL System, a \nhigh performance natural language system, made \nthe experiments possible and meaningful. The \nindicated improvements in habitability are now \nbeing embodied in the POL (Problem Oriented \nLanguage) System, a successor to REL.",
"For many years, I have been pursuing a long-range research objective in the area of natural language understanding for man-machine communication, an objective that I share with many of my colleagues. The objective is to develop the capability for people to interact directly in fluent natural language with a computer system for support of some decision making task they are involved in. Specifically, I am concerned with the use of natural language to manipulate data retrieval and display capabilities to enable a decision-maker to obtain a grasp of an overall situation in the face of an overwhelming availability of low level data. Such a system must give concise answers to specific high-level questions posed by the decision-maker within a small number of seconds in most cases.",
"3. When a map, drawing or chart, etc., is part of the material being photo graphed the photographer has followed a definite method in “sectioning” the material. It is customary to begin Aiming at the upper left hand comer of a large sheet and to continue from left to right in equal sections with small overlaps. If necessary, sectioning is continued again-beginning below the Arst row and continuing on until complete.",
"Abstract : This dissertation describes a new method of automated text analysis. FRUMP (Fast Reading Understanding and Memory Program) is a working natural language processing system that has been implemented to demonstrate the viability of this new approach. The system skims news stories directly from the United Press International news wire and produces a summary of what it understands. FRUMP is able to correctly process news articles it has never before seen. (Author)",
"The r e have recent ly been a number of attempts to provide natural and flexible inter faces to computer systems through the medium of natural language. While such interfaces typ ica l ly pe r f o rm wel l in response to straightforward requests and questions within their domain of d i scourse , they often fail to interact gracefully with their users in less pred ictab le c i rcumstances. Most current systems cannot, for instance: respond reasonably to input not con fo rming to a rigid grammar; ask for and understand clarification if their user's input is unclear; o f fe r clarif ication of their own output if the user asks for it; or interact to reso lve any ambiguit ies that may arise when the user attempts to describe things to the system. We be l ieve that graceful interaction in these and the many other contingencies that can ar i se in human conversat ion is essential if interfaces are ever to appear cooperat ive and he lp fu l , and hence be suitable for the casual or naive user, and more habitable for the expe r i enced user. In this paper; we attempt to circumscribe graceful interaction as a f ie ld for s tudy , and ident i fy the problems involved in achieving it. T o this end we decompose graceful interaction into a number of relatively independent ski l ls: skil ls involved in parsing elliptical, fragmented, and otherwise ungrammatical input; in ensu r ing robust communication; in explaining abilities and limitations, actions and the motives beh ind them; in keeping track of the focus of attention of a dialogue; in identifying things f rom descr ipt ions, even if ambiguous or unsatisfiable; and in describing things in terms app rop r i a te for the context. We claim these skills are necessary for any type of gracefu l in teract ion and sufficient for graceful interaction in a certain large class of appl icat ion domains. None of these components is individually much beyond the current state of the art, and w e outl ine the architecture of a system that integrates them all. Thus, we p ropose g race fu l interact ion as an idea of great practical utility whose time has come and wh ich is r i pe for implementation. We are currently implementing a gracefully interacting system along the l ines presented; the system will initially deal with typed input, but is eventual ly intended to accept natural speech.",
"Thesis (Ph. D.)--Massachusetts Institute of Technology, Alfred P. Sloan School of Management, 1975."
],
"authors": [
{
"name": [
"L. A. Miller",
"George E. Heidorn",
"Karen Jensen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Kwasny",
"N. Sondheimer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. H. Thompson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Woods"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Kwasny"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ii Gerald Francis Dejong"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Hayes",
"D. R. Reddy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Malhotra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"17922808",
"181820",
"1010309",
"20002644",
"59707086",
"60953693",
"7385382",
"60706096"
],
"intents": [
[
"background"
],
[],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false
]
} | - Problem: The paper addresses the challenge of developing Natural Language Processing systems that can effectively handle unconventional inputs in a practical manner, acknowledging the limitations of current technology in dealing with non-standard language forms.
- Solution: The paper proposes that by continuing to pursue more sophisticated models that can gracefully handle unconventional language forms, such as pidgin-like characteristics and conjunction variations, and by exploring techniques like meta-rules and user feedback, improvements can be made in Natural Language Processing systems in both the short and long term. | 512 | 0.003906 | null | null | null | null | null | null | null | null |
8ae878dce20d81ee65d6f81cea153a5a069b078b | 17737088 | null | {E}nglish Words and Data Bases: How to Bridge the Gap | If a q.a. system tries to transform an English question directly into the simplest possible formulation of the corresponding data base query, discrepancies between the English lexicon and the structure of the data base cannot be handled well. To be able to deal with such discrepancies in a systematic way, the PHLIQAI system distinguishes different levels of semantic representation; it contains modules which translate from one level to another, as well as a module which simplifies expressions within one level. The paper shows how this approach takes care of some phenomena which would be problematic in a more simple-minded set-up. | {
"name": [
"Scha, Remko J. H."
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 4 | 5 | null | If a question-answering system is to cover a non-trivial fragment of its natural input-language, and to allow for an arbitrarily structured data base, it cannot assume that the syntactic/semantic structure of an input question has much in common with the formal query which would formulate in terms of the actual data base structure what the desired information is. An important decision in the design of a q.a. system is therefore, how to embody in the system the necessary knowledge about the relation between English words and data base notions.Most existing programs, however, do not face this issue. They accept considerable constraints on both the input language and the possible data base structures, so as to be able to establish a fairly direct correspondence between the lexical items of the input language and the primitives of the data base, which makes it possible to translate input questions into query expressions in a rather straightforward fashion.In designing the PHLIQAI system, bridging the gap between free English input and an equally unconstrained data base structure was one of the main goals. In order to deal with this problem in a systematic way, different levels of semantic analysis are distinguished in the PHLIQAI program. At each of these levels, the meaning of the input question is represented by an expression of a formal logical language. The levels differ in that each of them assumes different semantic primitives.At the highest of these levels,the meaning of the question is represented by an expression of the English-oriented Formal Language (EFL); this language uses semantic primitives which correspond to the descriptive lexical items of English. The prim-itives of the lowest semantic level are the primitives of the data base (names of files, attributes, data-items). The formal language used at this level is therefore called the Data Base Language (DBL). Between EFL and DBL, several other levels of meaning representation are used as intermediary steps. Because of the space limitations imposed on the present paper, I am forced to evoke a somewhat misleading picture of the PHLIQA set-up, by ignoring these intermediate levels.Given the distinctions just introduced, the problem raised by the discrepancy between the English lexicon and the set of primitives of a given data base can be formulated as follows: one must devise a formal characterization of the relation between EFL and DBL, and use this characterization for an effective procedure which translates EFL queries into DBL queries. I will introduce PHLIQA's solution to this problem by giving a detailed discussion of some examples I which display complications that Robert Moore suggested as topics for the panel discussion at this conference.The highest level of semantic representation is independent of the subject-domain. It contains a semantic primitive for every descriptive lexical item of the input-language 2. The semantic types of these primitives are systematically related to the syntactic categories of the corresponding lexical items. For example, for every noun there is a constant which denotes the set of individuals which fall under the description of this noun: corresponding to "employee" and "employees" there is a constant EMPLOYEES denoting the set of all employees, corresponding to "department" and "departments" there is a constant DEPARTMENTS denoting the set of all departments. 
Corresponding to an n-place verb there is an n-place predicate. For instance, "to have" corresponds to the 2-place predicate HAVE. Thus, the input analysis component of the system translates the question

"How many departments have more than 100 employees?" (1)

into

Count({x ∈ DEPARTMENTS | Count({y ∈ EMPLOYEES | HAVE(x,y)}) > 100}). (2)

[Footnote: There is no space for a definition of the logical formalism I use in this paper. Closely related logical languages are defined in Scha (1976), Landsbergen and Scha (1979), and Bronnenberg et al. (1980).]

III THE DATA BASE ORIENTED LEVEL OF MEANING REPRESENTATION

A data base specifies an interpretation of a logical language, by specifying the extension of every constant. A formalization of this view on data bases, and its application to a CODASYL data base, can be found in Bronnenberg et al. (1980). The idea is equally applicable to relational data bases. A relational data base specifies an interpretation of a logical language which contains for every relation R[K, A1, ..., An] a constant K denoting a set, and n functions A1, ..., An which have the denotation of K as their domain. Thus, if we have an EMPLOYEE file with a DEPARTMENT field, this file specifies the extension of a set EMPS and of a function DEPT which has the denotation of EMPS as its domain. In terms of such a data base structure, (1) above may be formulated as

Count({x ∈ (for: EMPS, apply: DEPT) | Count({y ∈ EMPS | DEPT(y)=x}) > 100}). (3)

I pointed out before that it would be unwise to design a system which would directly assign the meaning (3) to the question (1). A more sensible strategy is to first assign (1) the meaning (2). The formula (3), or a logically equivalent one, may then be derived on the basis of a specification of the relation between the English word meanings used in (1) and the primitive concepts at the data base level.

Though we defined EFL and DBL independently of each other (one on the basis of the possible English questions about the subject-domain, the other on the basis of the structure of the data base about it) there must be a relation between them. The data base contains information which can serve to answer queries formulated in EFL. This means that the denotation of certain EFL expressions is fixed if an interpretation of DBL is given. We now consider how the relation between EFL and DBL may be formulated in such a way that it can easily serve as a basis for an effective translation from EFL expressions into DBL expressions. The most general formulation would take the form of a set of axioms, expressed in a logical language encompassing both EFL and DBL. If we allow the full generality of that approach, however, it leads to the use of algorithms which are not efficient and which are not guaranteed to terminate. An alternative formulation, which is attractive because it can easily be implemented by effective procedures, is one in terms of translation rules. This is the approach adopted in the PHLIQAI system.
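Before turning to the translation rules themselves, here is a small sketch of what section III amounts to operationally: a relational EMPLOYEE file with a DEPARTMENT field fixes the interpretation of the DBL constants EMPS and DEPT, so that the data base oriented formulation (3) can be evaluated directly. It is only an illustration, not PHLIQA code; the table contents and the lowered threshold (2 instead of 100) are invented for readability.

```python
# A minimal sketch (not PHLIQA code): the EMPLOYEE file fixes the extensions of
# the DBL constants EMPS and DEPT, and formulation (3) is evaluated against them.
# Table contents and the lowered threshold (2 instead of 100) are invented.

employee_file = {            # EMPLOYEE file: employee -> value of the DEPARTMENT field
    "adams": "sales",
    "baker": "sales",
    "clark": "sales",
    "davis": "research",
    "evans": "research",
}

EMPS = set(employee_file)    # extension of the set constant EMPS
def DEPT(e):                 # extension of the function constant DEPT
    return employee_file[e]

def count(s):                # Count
    return len(s)

# (3): Count({x in (for: EMPS, apply: DEPT) | Count({y in EMPS | DEPT(y) = x}) > 2})
departments = {DEPT(e) for e in EMPS}          # (for: EMPS, apply: DEPT)
answer = count({x for x in departments
                if count({y for y in EMPS if DEPT(y) == x}) > 2})
print(answer)                                  # 1: only "sales" has more than two employees
```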
(1980) and can be summarized as follows. The relation between subsequent semantic levels can be described by means of local translation rules which specify, for every descriptive constant of the source language, a corresponding expression of the target language. [Footnote: I ignore the complexities which arise because of the typing of variables if a many-sorted logic is used. Again, see Bronnenberg et al. (1980) for details.] A set of such translation rules defines for every source-language query-expression an equivalent target-language expression. An effective algorithm can be constructed which performs this equivalence translation for any arbitrary expression.

A translation algorithm which applies the translation rules in a straightforward fashion often produces large expressions which allow for considerably simpler paraphrases. As we will see later on in this paper, it may be essential that such simplifications are actually performed. Therefore, the result of the EFL-to-DBL translation is processed by a module which applies logical equivalence transformations in order to simplify the expression. At the most global level of description, the PHLIQA system can thus be thought to consist of the following sequence of components: input analysis, yielding an EFL expression; EFL-to-DBL translation; simplification of the DBL expression; evaluation of the resulting expression.

For the example introduced in sections II and III, a specification of the EFL-to-DBL translation rules might look like this:

DEPARTMENTS → (for: EMPS, apply: DEPT)
EMPLOYEES → EMPS
HAVE → (λx,y: DEPT(y) = x)

These rules can be directly applied to the formula (2). Substitution of the right-hand expressions for the corresponding left-hand constants in (2), followed by λ-reduction, yields (3).

It is easy to imagine a different data base which would also contain sufficient information to answer question (1). One example would be a data base which has a file of DEPARTMENTS, and which has NUMBER-OF-EMPLOYEES as an attribute of this file. This data base specifies an interpretation of a logical language which contains the set-constant DEPTS and the function #EMP (from departments to integers) as its descriptive constants. In terms of this data base, the query expressed by (1) would be:

Count({x ∈ DEPTS | #EMP(x) > 100}).

If we try to describe the relation between EFL and DBL for this case, we face a difficulty which did not arise for the data base structure of section III: the DBL constants do not allow the construction of DBL expressions whose denotations involve employees. So the EFL constant EMPLOYEES cannot be translated into an equivalent DBL expression, nor can the relation HAVE, for lack of a suitable domain. This may seem to force us to give up local translation for certain cases: instead, we would have to design an algorithm which looks out for sub-expressions of the form

(λy: Count({x ∈ EMPLOYEES | HAVE(y, x)})),

where y is ranging over DEPARTMENTS, and then translates this whole expression into #EMP. This is not attractive; it could only work if EFL expressions would first be transformed so as to always contain this expression in exactly this form, or if we would have an algorithm for recognizing all its variants.

Fortunately, there is another solution. Though in DBL terms one cannot talk about employees, one can talk about objects which stand in a one-to-one correspondence to the employees: the pairs consisting of a department d and a positive integer i such that i is not larger than the value of #EMP for d.
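To make this pairing concrete before it is put to work, here is a minimal sketch in plain Python (invented department data; not PHLIQA's query language or implementation) of employees being represented by such department/integer pairs, and of the effect the simplification component is meant to achieve.

DEPTS = ["sales", "research", "assembly"]                  # invented contents of DEPTS
NUM_EMP = {"sales": 40, "research": 130, "assembly": 270}  # plays the role of #EMP

def ints(i):
    # INTS(i): the set {j | 0 < j <= i} used in the proxy construction.
    return range(1, i + 1)

# Proxies for employees: pairs <d, i> with d a department and 0 < i <= #EMP(d).
# They are in one-to-one correspondence with the employees even though no
# individual employee is stored in this data base.
EMPLOYEE_PROXIES = [(d, i) for d in DEPTS for i in ints(NUM_EMP[d])]

# Question (1) evaluated over the proxies: HAVE(x, y) becomes "the department
# component of the proxy y equals x".
answer = sum(1 for x in DEPTS
             if sum(1 for (dept, _i) in EMPLOYEE_PROXIES if dept == x) > 100)

# The simplification step should reduce the translated query to the direct
# form Count({x in DEPTS | #EMP(x) > 100}), which never builds proxies at all.
assert answer == sum(1 for x in DEPTS if NUM_EMP[x] > 100)
print(answer)   # 2 for this invented data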
Entities which have a one-to-one correspondence with these pairs, and are disjoint with the extensions of all other semantic types, may be used as "proxies" for employees. Thus, we may define the following translation: EMPLOYEES ~ U(for: DEPTS, apply: ( is a functionwhich establishes a oneem -to-one correspondence between its domain and its range (its range is disjoint with all other semantic types); rid is the inverse of id ; INTS is a emp function which assigns to any integer i the set of integers j such that 0<j~i.Application of these rules to 2 It is clear that this data base, because of its greater "distance" to the English lexicon, requires a more extensive set of simplification rules if the DBL query produced by the translation rules is to be transformed into its simplest possible form. A simplification algorithm dealing succesfully with complexities of the kind just illustrated was implemented by W.J. Bronnenberg as a component of the PHLIQAI system.Consider a slight variation on question (I): "How many departments have more than i00 people ?" (7~) We may want to treat "people" and "e~!oyees" as non-synonymous. For instance, we may want to be able to answer the question "Are all employees employed by a department ?" with "Yes", but "Are all people employed by a department ?" with "I don't know". Nevertheless, (7) can be given a definite answer on the basis of the data base of section IlL The method as described so far hasaproblem with this example: although the answer to (7) is determined by the data base, the question as formulated refers to entities which are not represented in the data base, cannot be constructed out of such entities, and do not stand in a one-to-one correspondence with entities which can be so constructed. In order to be able to construct a DBL translation of (7) by means of local substitution rules of the kind previously illustrated, we need an extended version of DBL, which we will call DBL*, containing the same constants as DBL plus a constant NONEMPS, denoting the set of persons who are not employees. Now, local translation rules for the EFL-to-DBL* translation may be specified. Application of these translation rules to the EFL representation of (7) yields a DBL* expression containing the unevaluable constant NONEMPS. The system can only give a definite answer if this constant is eliminated by the simplification component.If the elimination does not succeed, PHLIQA still gives a meaningful "conditional answer". It translates NONEMPS into ~ and prefaces the answer with "if there are no people other than employees, ...". Again, see Bronnenberg et al. (1980) for details.Some attractive properties of the translation method are probably clear from the examples. Local translation rules can be applied effectively and have to be evoked only when they are directly relevant. Using the techniques of introducing "proxies" (section V) and "complementary constants" (section VI) in DBL, a considerable distance between the English lexicon and the data base structure can be covered by means of local translation rules.The problem of simplifying the DBL* expression (and other, intermediate expressions, in the full version of the PHLIQA method) can be treated separately from the peculiarities of particular data bases and particular constructions of the input language.In previous papers it has been pointed out that this idea, taken strictly, leads not to an ordinary logical language, but requires a formal language which is ambiguous. I ignore this aspect here. 
What I call EFL corresponds to what was called EFL-in some other papers. SeeLandsbergenandScha (1979) andBronnenberg et al. (1980) for discussion. | null | null | null | null | Main paper:
introduction:
If a question-answering system is to cover a non-trivial fragment of its natural input-language, and to allow for an arbitrarily structured data base, it cannot assume that the syntactic/semantic structure of an input question has much in common with the formal query which would formulate in terms of the actual data base structure what the desired information is. An important decision in the design of a q.a. system is therefore, how to embody in the system the necessary knowledge about the relation between English words and data base notions.Most existing programs, however, do not face this issue. They accept considerable constraints on both the input language and the possible data base structures, so as to be able to establish a fairly direct correspondence between the lexical items of the input language and the primitives of the data base, which makes it possible to translate input questions into query expressions in a rather straightforward fashion.In designing the PHLIQAI system, bridging the gap between free English input and an equally unconstrained data base structure was one of the main goals. In order to deal with this problem in a systematic way, different levels of semantic analysis are distinguished in the PHLIQAI program. At each of these levels, the meaning of the input question is represented by an expression of a formal logical language. The levels differ in that each of them assumes different semantic primitives.At the highest of these levels,the meaning of the question is represented by an expression of the English-oriented Formal Language (EFL); this language uses semantic primitives which correspond to the descriptive lexical items of English. The prim-itives of the lowest semantic level are the primitives of the data base (names of files, attributes, data-items). The formal language used at this level is therefore called the Data Base Language (DBL). Between EFL and DBL, several other levels of meaning representation are used as intermediary steps. Because of the space limitations imposed on the present paper, I am forced to evoke a somewhat misleading picture of the PHLIQA set-up, by ignoring these intermediate levels.Given the distinctions just introduced, the problem raised by the discrepancy between the English lexicon and the set of primitives of a given data base can be formulated as follows: one must devise a formal characterization of the relation between EFL and DBL, and use this characterization for an effective procedure which translates EFL queries into DBL queries. I will introduce PHLIQA's solution to this problem by giving a detailed discussion of some examples I which display complications that Robert Moore suggested as topics for the panel discussion at this conference.The highest level of semantic representation is independent of the subject-domain. It contains a semantic primitive for every descriptive lexical item of the input-language 2. The semantic types of these primitives are systematically related to the syntactic categories of the corresponding lexical items. For example, for every noun there is a constant which denotes the set of individuals which fall under the description of this noun: corresponding to "employee" and "employees" there is a constant EMPLOYEES denoting the set of all employees, corresponding to "department" and "departments" there is a constant DEPARTMENTS denoting the set of all departments. Corresponding to an n-place verb there is an n-place predicate. For instance, "to have" corresponds to the 2-place predicate HAVE. 
Thus, the input analysis component. . . . . . . . . . . . . . . . . . . . . . . . .I There is no space for a definition of the logical formalism I use in this paper. Closely related logical languages are defined in Scha (1976) , Landsbergen and Scha (1979) , and Bronnenberg et a1.(1980) . of the system translates the question "How many departments have more than i00 employees ?" (i) into Count({x E DEPARTMENTS I Count({y e EMPLOYEESIHAVE(x,y)}) > I00}). 2III THE DATA BASE ORIENTED LEVEL OF MEANING REPRESENTATION A data base specifies an interpretation of a logical language, by specifying the extension of every constant. A formalization of this view on data bases, an& its application to a CODASYL data base, can be found in Bronnenberg et ai.(1980) . The idea is equally applicable to relational data bases.A relational data base specifies an interpretation of a logical language which contains for every relation R [K, At, .... An] a constant K denoting a set, and n functions Al,..., An which have the denotation of K as their domain. ~ Thus, if we have an EMPLOYEE file with a DEPARTMENT field, this file specifies the extension of a set EMPS and of a function DEPT which has the denotation of EMPS as its domain. In terms of such a data base structure, (i) above may be formulated as Count({xe (for: EMPS, apply: DEPT) 1Count((y e EMPSIDEPT(y)=x}) > i00}).(3) I pointed out before that it would be unwise to design a system which would directly assign the meaning (3) to the question (I). A more sensible strategy is to first assign (I) the meaning (2). The formula (3), or a logically equivalent dne, may then be derived on the basis of a specification of the relation between the English word meanings used in (i) and the primitive concepts at the data base level.Though we defined EFL and DBL independently of each other (one on the basis of the possible English questions about the subject-domain, the other on the basis of the structure of the data base about it) there must be a relation between them. The data base contains information which can serve to answer queries formulated in EFL. This means that the denotation of certain EFL expressions is fixed if an interpretation of DBL is given.We now consider how the relation between EFL and DBL may be formulated in such a way that it can easily serve as a basis for an effective translation from EFL expressions into DBL expressions. The most general formulation would take the form of a set of axioms, expressed in a logical language encompassing both EFL and DBL. If we allow the full generality of that approach, however, it leads to the use of algorithms which are not efficient and which are not guaranteed to terminate. An alternative formulation, which is attractive because it can easily be implemented by effective procedures, is one in terms of translation rules. This is the approach adopted in the PHLIQAI system. It is described in detail in Bronnenberg et al. 
(1980) and can be summarized as follows.The relation between subsequent semantic levels can be described by means of local translation rules which specify, for every descriptive constant of the source language, a corresponding expression of the target language I • A set of such translation rules defines for every source language query-expression an equivalent target language expresslono An effective algorithm can be constructed which performs this equivalence translation for any arbitrary expression.A translation algorithm which applies the translation rules in a straightforward fashion, often produces large expressions which allow for considerably simpler paraphrases. As we will see later on in this paper, it may be essential that such simplifications are actually performed. Therefore, the result of the EFL-to-DBL translation is processed by a module which applies logical equivalence transformations in order ~o simplify the expression.At the most global level of description, the PHLIQA system can thus be thought to consist of the following sequence of components: Input analysis, yielding an EFL expression; EFL-to-DBL translation! simplification of the DBL expression; evaluation of the resulting expression.For the example introduced in the sections II and III, a specification of the EFL-to-DBL translation rules might look llke this: DEPARTMENTS ~ (for: EMPS, apply: DEPT) EMPLOYEES ÷ EMPS HAVE ÷ (%x,y: DEPT(y)=x) These rules can be directly applied to the formula (2). Substitution of the right hand expressions for the corresponding left hand constants in (2), followed by X-reduction, yields (3).It is easy to imagine a different data base which would also contain sufficient information to answer question (i). One example would be a data base which has a file of DEPARTMENTS, and which has NUMBER-OF-EMPLOYEES as an attribute of this fileo This data base specifies an interpretation of a logical language which contains the set-constant DEPTS and the function #EMP (from departments to integers) as its descriptive constants. In terms of this data base, the query expressed by (i) would be:Count (~x e DEPTSI #EMP (x) > i00}).If we try to describe the relation between EFL and DBL for this case, we face a difficulty which dld not arise for the data base structure of section III: the DBL constants do not allow the construction of DBL expressions whose denotations involve employees. So the EFL constant EMPLOYEES cannot be translated into an equivalent DBL expression -nor can the relation HAVE, for lack of a suitable domain. This may seem to force us to give up local translation for certain cases: instead, we would have to design an algorithm which looks out for sub-expressions of the form I ignore the complexities which arise because of the typing of variables, if a many-sorted logic is used. Again, see Bronnenberget al. (1980) , for details.(%y: Count( {x EEMPLOYEES IHAVE(y,x)} )), where y is ranging over DEPARTMENTS, and then translates this whole expression into: #~. This is not attractive -it could only work if EFL expressions would be first transformed so as to always contain this expression in exactly this form, or if we would have an algorithm for recognizing all its variants.Fortunately, there is another solution. Though in DBL terms one cannot talk about employees, one can talk about objects which stand in a one-to-one correspondence to the employees: the pairs consisting of a department d and a positive integer i such that i is not larger than than the value of #E~ for d. 
Entities which have a one-to-one correspondence with these pairs, and are disjoint with the extensions of all other semantic types, may be used as "proxies" for employees. Thus, we may define the following translation: EMPLOYEES ~ U(for: DEPTS, apply: ( is a functionwhich establishes a oneem -to-one correspondence between its domain and its range (its range is disjoint with all other semantic types); rid is the inverse of id ; INTS is a emp function which assigns to any integer i the set of integers j such that 0<j~i.Application of these rules to 2 It is clear that this data base, because of its greater "distance" to the English lexicon, requires a more extensive set of simplification rules if the DBL query produced by the translation rules is to be transformed into its simplest possible form. A simplification algorithm dealing succesfully with complexities of the kind just illustrated was implemented by W.J. Bronnenberg as a component of the PHLIQAI system.Consider a slight variation on question (I): "How many departments have more than i00 people ?" (7~) We may want to treat "people" and "e~!oyees" as non-synonymous. For instance, we may want to be able to answer the question "Are all employees employed by a department ?" with "Yes", but "Are all people employed by a department ?" with "I don't know". Nevertheless, (7) can be given a definite answer on the basis of the data base of section IlL The method as described so far hasaproblem with this example: although the answer to (7) is determined by the data base, the question as formulated refers to entities which are not represented in the data base, cannot be constructed out of such entities, and do not stand in a one-to-one correspondence with entities which can be so constructed. In order to be able to construct a DBL translation of (7) by means of local substitution rules of the kind previously illustrated, we need an extended version of DBL, which we will call DBL*, containing the same constants as DBL plus a constant NONEMPS, denoting the set of persons who are not employees. Now, local translation rules for the EFL-to-DBL* translation may be specified. Application of these translation rules to the EFL representation of (7) yields a DBL* expression containing the unevaluable constant NONEMPS. The system can only give a definite answer if this constant is eliminated by the simplification component.If the elimination does not succeed, PHLIQA still gives a meaningful "conditional answer". It translates NONEMPS into ~ and prefaces the answer with "if there are no people other than employees, ...". Again, see Bronnenberg et al. (1980) for details.Some attractive properties of the translation method are probably clear from the examples. Local translation rules can be applied effectively and have to be evoked only when they are directly relevant. Using the techniques of introducing "proxies" (section V) and "complementary constants" (section VI) in DBL, a considerable distance between the English lexicon and the data base structure can be covered by means of local translation rules.The problem of simplifying the DBL* expression (and other, intermediate expressions, in the full version of the PHLIQA method) can be treated separately from the peculiarities of particular data bases and particular constructions of the input language.In previous papers it has been pointed out that this idea, taken strictly, leads not to an ordinary logical language, but requires a formal language which is ambiguous. I ignore this aspect here. 
What I call EFL corresponds to what was called EFL-in some other papers. SeeLandsbergenandScha (1979) andBronnenberg et al. (1980) for discussion.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 512 | 0.009766 | null | null | null | null | null | null | null | null |
38b47811c83fbd6f710c64d03be4fdc1c0ce08c1 | 2166494 | null | Natural Language Database Updates | Although a great deal of research effort has been expended in support of natural language (NL) database querying, little effort has gone to NL database update. One reason for this state of affairs is that in NL querying, one can tie nouns and stative verbs in the query to database objects (relation names, attributes and domain values). In many cases this correspondence seems sufficient to interpret NL queries. NL update seems to require database counterparts for active verbs, such as "hire," "schedule" and "enroll," rather than for stative entities. There seem to be no natural candidates to fill this role. | {
"name": [
"Salveter, Sharon C. and",
"Maier, David"
],
"affiliation": [
null,
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 21 | 11 | null | null | null | Although a great deal of research effort has been expended in support of natural language (NL) database querying, little effort has gone to NL database update.One reason for this state of affairs is that in NL querying, one can tie nouns and stative verbs in the query to database objects (relation names, attributes and domain values). In many cases this correspondence seems sufficient to interpret NL queries.NL update seems to require database counterparts for active verbs, such as "hire," "schedule" and "enroll," rather than for stative entities.There seem to be no natural candidates to fill this role.We suggest a database counterpart for active verbs, which we call verbsraphs.The verbgraphs may be used to support NL update.A verbgraph is a structure for representing the various database changes that a given verb might describe.In addition to describing the variants of a verb, they may be used to disamblguate the update command.Other possible uses of verbgraphs include, specification of defaults, prompting of the user to guide but not dictate user interaction and enforcing a variety of types of database integrity constraints.We want to support natural language interface for all aspects of database manipulation. English and English-like query systems already exist, such as ROBOT[Ha77] , TQA[Da78] , LUNAR[W076] and those described by Kaplan[Ka79] , Walker[Wa78] and Waltz [Wz75] . We propose to extend natural language interac$ion to include data modification (insert, delete, modify) rather than simply data extraction. The desirability and unavailability of natural language database modification has been noted by Wiederhold, et al.[Wi81] .Database systems currently do not contain structures for explicit modelling of real world changes.A state of a database (OB) is meant to represent a state of a portion of the real world.This research is partially supported by NSF grants IST-79-18264 and ENG-79-07794.We refer to the abstract description of the portion of the real world being modelled as the semantic data descri~tlo n (SDD). A SDD indicates a set of real world states (RWS) of interest, a DB definition gives a set of allowable database states (DBS). The correspondence between the SDD and the DB definition induces connections between DB states and real world states.The situation is diagrammed in Natural language (NL) querying of the DB requires that the correspondence between the SDD and the DB definition be explicitly stated.The query system must translate a question phrased in terms of the SDD into a question phrased in terms of a data retrieval command in the language of the DB system.The response to the command must be translated back into terms of the SDD, which yields information about the real world state.For NL database modification, this stative correspondence between DB states and real world states is not adequate.We want changes in the real world to be reflected in the DB. In Figure 2 we see that when some action in the real world causes a state change from RWSI to RWS2, we must perform some modification to the DB to change its state from DBSI to DBS2.Databasef action D}IL RWS2 ~ DBS2 Figure 2We have a means to describe the action that changed the state of the real world: active verbs. 
We also have a means to describe a change in the DB state: data manipulation language (DML) command sequences. But given a real-world action, how do we find a DML command sequence that will accomplish the corresponding change in the DB? Before we explore ways to represent this active correspondence (the connection between real-world actions and DB updates), let us examine how the stative correspondence is captured for use by a NL query system.

We need to connect entities and relationships in the SDD with files, fields and field values in the DB. This stative correspondence between RWS and DBS is generally specified in a system file. For example, in Harris' ROBOT system, the semantic description is implicit and it is assumed to be given in English. The entities and relationships in the description are roughly English nouns and stative verbs. The correspondence of the SDD to the DB is given by a lexicon that associates English words with files, fields and field values in the DB. This lexicon also gives possible referents for words and phrases such as "who," "where" and "how much."

Consider the following example. Suppose we have an office DB of employees and their scheduled meetings, reservations for meeting rooms and messages from one employee to another. We capture this information in four relations, EMP, APPOINTMENT, ROOMRESERVE and MAILBOX (their attribute lists are given in a figure that is not reproduced here). Thus, to find the name and phone number of whoever has reserved room 85 for 2:45pm today, we can arrive at the query

in EMP, ROOMRESERVE retrieve name, phone where name = reserver and room = 85 and time = 2:45pm and date = CURRENT-DATE

Suppose we now want to make a change to the database: "Schedule Bob Marley for 2:15pm Friday." This request could mean schedule a meeting with an individual or schedule Bob Marley for a seminar. We want to connect "schedule" with the insertion of a tuple in either APPOINTMENT or ROOMRESERVE. Although we may have pointers from "schedule" to APPOINTMENT and ROOMRESERVE, we do not have adequate information for choosing the relation to update.

Although files, fields, domains and values seem to be adequate for expressing the stative correspondence, we have no similar DB objects to which we may tie verbs that describe actions in the real world. The best we can do with files, fields and domains is to indicate what is to be modified; we cannot specify how to make the modification. We need to connect the verbs "schedule," "hire" and "reserve" with some structures that dictate appropriate DML sequences that perform the corresponding updates to the DB. The best we have is a specific DML command sequence, a transaction, for each instance of "schedule" in the real world. No single transaction truly represents all the implications and variants of the "schedule" action. "Schedule" really corresponds to a set of similar transactions, or perhaps some parameterized version of a DB transaction.

[Figure 3: the active correspondence between "Schedule" and a parameterized transaction (PT), with induced connections from instances of the action to DB states such as DBS2.]

The desired situation is shown in Figure 3. We have an active correspondence between "schedule" and a parameterized DB transaction PT. Different instances of the schedule action, S1 and S2, cause different changes in the real world state. From the active correspondence of "schedule" and PT, we want to produce the proper transaction, T1 or T2, to effect the correct change in the DB state.
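As a sketch of why a single DB object cannot play this role, the following Python fragment (invented in-memory relations and values, not the paper's system) writes down two candidate parameterized transactions for "Schedule Bob Marley for 2:15pm Friday"; both are plausible readings of "schedule," and nothing in the schema alone selects between them.

APPOINTMENT = []   # invented stand-in for the APPOINTMENT relation
ROOMRESERVE = []   # invented stand-in for the ROOMRESERVE relation

def schedule_appointment(name, who, date, time, where=None, topic=None):
    # Reading 1: "schedule" means inserting an APPOINTMENT tuple.
    APPOINTMENT.append({"name": name, "who": who, "date": date,
                        "time": time, "where": where, "topic": topic})

def schedule_room(reserver, room, date, time):
    # Reading 2: "schedule" means inserting a ROOMRESERVE tuple,
    # e.g. booking a room for a seminar speaker.
    ROOMRESERVE.append({"reserver": reserver, "room": room,
                        "date": date, "time": time})

# "Schedule Bob Marley for 2:15pm Friday" fixes some parameter values but not
# which transaction to run, and each reading still leaves parameters unfilled.
schedule_appointment(name="USER", who="Bob Marley", date="Friday", time="2:15pm")
schedule_room(reserver="Bob Marley", room=85, date="Friday", time="2:15pm")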
There is not an existing candidate for the high-level specification language for verb descriptions. We must be able to readily express the correspondence between actions in the semantic world and verb descriptions in this high-level specification. We depend heavily on this correspondence to process natural language updates, just as the stative correspondence is used to process natural language queries. In the next section we examine these requirements in more detail and offer, by example, one candidate for the representation.

Another indication of the problem of active verbs in DBs shows up in looking at semantic data models. Semantic data models are systems for constructing precise descriptions of portions of the real world, semantic data descriptions (SDDs), using terms that come from the real world rather than a particular DB system. A SDD is a starting point for designing and comparing particular DB implementations. Some of the semantic models that have been proposed are the entity-relationship model [Ch76], SDM [HM81], RM/T [Co79], TAXIS [MB80] and Beta [Br78]. For some of these models, methodologies exist for translating to a DB specification in various DB models, as well as for expressing the static correspondence between a SDD in the semantic model and a particular DB implementation. To express actions in these models, however, there are only terms that refer to DBs: insert, delete, modify, rather than schedule, cancel, postpone (the notable exception is Skuce [Sk80]).

While there have been a number of approaches made to NL querying, there seems to be little work on NL update. Carbonell and Hayes [CH81] have looked at parsing a limited set of NL update commands, but they do not say much about generating the DB transactions for these commands. Kaplan and Davidson [KD81] have looked at the translation of NL updates to transactions, but the active verbs they deal with are synonyms for DB terms, essentially following the semantic data model as above. This limitation is intentional, as the following excerpt shows: "First, it is assumed that the underlying database update must be a series of transactions of the same type indicated in the request. That is, if the update requests a deletion, this can only be mapped into a series of deletions in the database." While some active verbs, such as "schedule," may correspond to a single type of DB update, there are other verbs that will require multiple types of DB updates, such as "cancel," which might require sending a message as well as removing an appointment. Kaplan and Davidson are also trying to be domain independent, while we are trying to exploit domain-specific information.

We propose a structure, a verbgraph, to represent action verbs. Verbgraphs are extensions of frame-like structures used to represent verb meaning in FDRAN [Sa78] and [Sa79]. One verbgraph is associated with each sense of a verb; that structure represents all variants. A real world change is described by a sentence that contains an active verb; the DB changes are accomplished by DML command sequences. A verbgraph is used to select DML sequences appropriate to process the variants of a verb sense. We also wish to capture the fact that one verb may be used as part of another: we may have a verb sense RESERVE-ROOM that may be used by itself or may be used as a subpart of the verb SCHEDULE-TALK. Figure 4 is an example of a verbgraph.
It models the "schedule appointment" sense of the verb "schedule."There are four basic variants we are attempting to capture; they are distinguished by whether or not the appointment is scheduled with someone in the company and whether or not a meeting room is to be reserved.There is also the possibility that the supervisor must be notified of the meeting.The verbgraph is directed acyclic graph (DAG) with 5 kinds of nodes: header, footer, information, AND (0) and OR (o). Header is the source of the graph, the footer is the sink.Every information node has one incoming and outgoing edge. An AND or OR node can have any number of incoming or outgoing edges.A variant corresponds to a directed path in the graph.We define a path to be connected subgraph such that I) the header is included; 2) the footer is included; 3) if it contains an information node, it contains the incoming and outgoing edge; 4) if it contains an AND node, it contains all incoming and outgoing edges; and 5) if it contains an OR node, it contains exactly one incoming and one outgoing edge.We can think of tracing a path in the graph by starting at the header and following its outgoing edge. Whenever we encounter an information node, we go through it. Whenever we encounter an ~ND node, the path divides and follows all outgoing edges. We may only pass through an AND node if all its incoming edges have been followed. An OR node can be entered on only one edge and we leave it by any of its outgoing edges.An example of a complete path is one that consists of theheader, footer, information nodes, A, B, D, J, and connector nodes, a, b, c, d, g, k, i, n. Although there is a direction to paths, we do not intend that the order of nodes on a path implies any order of processing the graph, except the footer node is always last to be processed. A variant of a verb sense is described by the set of all expressions in the information nodes contained in a path.Expressions in the information nodes can be of two basic types: assignment and restriction. The assignment type produces a value to be used in the update, either by input or computation; the key word input indicates the value comes from the user.Some examples of assignment are: The user must provide a value from the domain personname.2) (node labelled D in Figure 4 ) RES.date ÷ APPT.dateThe value for ApPT.date is used as the value RES.date.An example of restriction is: (node B in Figure 4) APPT.who in R1 where R1 = in EMP retrieve nameThis statement restricts the value of APPT.who to be a company employee.Also in Figure 4 , the symbols RI, R2, R 3 and R 4 stand for the retrievals R I = i_~nEMP retrieve name R 2 = i_nn EMP retrieve office where name = ApPT.name R 3 = i_~n EMP retrieve office where name = APPT.name or name = APPT.who.R 4 = in ~MP retrieve supervisor where name = APPT.name.In Node B, INFORM(APPT.who, APPT.name, 'meeting with me on %APPT.date at %APPT.time') stands for another verbgraph that represents sending a message by inserting a tuple in MAILBOX. We can treat the INFORM verbgraph as a procedure by specifying values for all the slots that must be filled from input.The input slots for INFORM are (name, from, message).One use for the verbgraphs is in support of NL directed manipulation of the DB. in particular, they can aid in variant selection.We assume that the correct verb sense has already been selected; we discuss sense selection later.Our goal is to use information in the query and user responses to questions to identify a path in the verbgraph. 
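The path conditions above lend themselves to a direct check. The following is a minimal sketch, in Python, of a verbgraph as typed nodes plus directed edges; the tiny graph is invented (it is not Figure 4) and only the OR behaviour is exercised, but the same test covers AND and information nodes.

EDGES = {("header", "o1"), ("o1", "A"), ("o1", "B"),
         ("A", "o2"), ("B", "o2"), ("o2", "footer")}
KIND = {"header": "header", "footer": "footer",
        "A": "info", "B": "info", "o1": "or", "o2": "or"}

def is_path(subnodes):
    # Conditions 1-5 from the text: header and footer included; an information
    # node brings its single incoming and outgoing edge; an AND node brings
    # all of its edges; an OR node is entered and left on exactly one edge.
    sub = set(subnodes)
    if not {"header", "footer"} <= sub:
        return False
    sub_edges = {(u, v) for (u, v) in EDGES if u in sub and v in sub}
    for n in sub:
        all_in = {e for e in EDGES if e[1] == n}
        all_out = {e for e in EDGES if e[0] == n}
        got_in = {e for e in sub_edges if e[1] == n}
        got_out = {e for e in sub_edges if e[0] == n}
        if KIND[n] in ("info", "and") and (got_in, got_out) != (all_in, all_out):
            return False
        if KIND[n] == "or" and (len(got_in) != 1 or len(got_out) != 1):
            return False
    return True

print(is_path({"header", "o1", "A", "o2", "footer"}))        # True: one variant
print(is_path({"header", "o1", "A", "B", "o2", "footer"}))   # False: an OR node left on two edges
print(is_path({"header", "o1", "o2", "footer"}))             # False: no information node chosen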
Let us refer again to the verbgraph for SCHEDULE-APPOINTMENT shown in Figure 4. Suppose the user command is "Schedule an appointment with James Parker on April 13," where James Parker is a company employee. Interaction with the verbgraph proceeds as follows. First, information is extracted from the command and classified by domain. For example, James Parker is in domain personname, which can only be used to instantiate APPT.name, APPT.who, APPT2.name and APPT2.who. However, since USER is a system variable, the only slots left are APPT.who and APPT2.name, which are necessarily the same. Thus we can instantiate APPT.who and APPT2.name with "James Parker." We classify "April 13" as a calendar date and instantiate APPT.date, APPT2.date and RES.date with it, because all these must be the same. No more useful information is in the query.

Second, we examine the graph to see if a unique path has been determined. In this case it has not. However, other possibilities are constrained because we know the path must go through node B. This is because the path must go through either node B or node C, and by analyzing the response to retrieval R1, we can determine it must be node B (i.e., James Parker is a company employee). Now we must determine the rest of the path. One determination yet to be made is whether or not node D is in the path. Because no room was mentioned in the query, we generate from the graph a question such as "Where will the appointment take place?" Suppose the answer is "my office." Presume we can translate "my office" into the scheduler's office number. This response has two effects. First, we know that no room has to be reserved, so node D is not in the path. Second, we can fill in APPT.where in node F. Finally, all that remains to be decided is if node H is on the path. A question like "Should we notify your supervisor?" is generated. Suppose the answer is "no." Now the path is completely determined; it contains nodes A, B and F.

Now that we have determined a unique path in the graph, we discover that not all the information has been filled in in every node on the path. We now ask the questions to complete these nodes, such as "What time?", "For how long?" and "What is the topic?". At this point we have a complete unique path, so the appropriate calls to INFORM can be made and the parameterized transaction in the footer can be filled in.

Note that the above interaction was quite rigidly structured. In particular, after the user issues the original command, the verbgraph instantiation program chooses the order of the subsequent data entry. There is no provision for default or optional values. Even if optional values were allowed, the program would have to ask questions for them anyway, since the user has no opportunity to specify them subsequent to the original command. We want the interaction to be more user-directed. Our general principle is to allow the user to volunteer additional information during the course of the interaction, as long as the path has not been determined and values remain unspecified. We use the following interaction protocol. The user enters the initial command and hits return. The program will accept additional lines of input. However, if the user just hits return, and the program needs more information, the program will generate a question. The user answers the question, followed by a return.
As before, additional information may be entered on subsequent lines.If the user hits return on an empty line, another question is generated, if necessary.Brodle[Br813 and Skuce[Sk80] both present systems for representing DB change.Skuce's goal is to provide an English-like syntax for DB procedure specification.Procedures have a rigid format and require all information to be entered at time of invocation in a specific order, as with any computer subprogram.Brodie is attempting to also specify DB procedures for DB change.He allows some information to be specified later, but the order is fixed.Neither allow the user to choose the order of entry, and neither accomodates variants that would require different sets of values to be specified.However, like our method, and unlike Kaplan and Davidson[KD81] , they attempt to model DB changes that correspond to real world actions rather than just specifying English synonyms for single DB come, ands.Certain constraints on updates are implicit on verbgraphs, such as APPT.where ÷ input from R3, which constrains the location of the meeting to be the office of one of the two employees.We also use verbgraphs to maintain database consistency. Integrity constraints take two forms: constraints on a single state and constraints on successive database states.The second kind is harder to enforce; few systems support constraints on successive states.Verbgraphs provide many opportunities for specifying various defaults.First, we can specify default values, which may depend on other values. Second, we can specify default paths.Verbgraphs are also a means for specifying non-DB operations. For example, if an appointment is made with someone outside the company, generate a confirmation letter to be sent.All of the above discussion has assumed we are selecting a variant where the sense has already been determined.In general sense selection, being equivalent to the frame selection problem in Artifical Intelligence[CW76], is very difficult. We do feel that verbgraph will aid in sense selection, but will not be as efficacious as for variant selection.In such a situation, perhaps the English parser can help disambiguate or we may want to ask an appropriate question to select the correct sense, or as a last resort, provide menu selection,We are currently considering hierarchically structured transactions, as used in the TAXIS semantic model [MB80], as an alternative to verbgraphs.Verbgraphs can be ambiguous, and do not lend themselves to top-down design. Hierarchical transactions would seem to overcome both problems. Hierarchical transactions in TAXIS are not quite as versatile as verbgraphs in representing variants. The hierarchy is induced by hierarchies on the entity classes involved.Variants based on the relationship among particular entities, as recorded in the database, cannot be represented. For example, in the SCHEDULE-APPOINTME/{T action, we may want to require that if a supervisor schedules a meeting with an employee not under his supervision, a message must be sent to that employee's supervisor.This variant cannot he distinguished by classlfl [ng one entity as a supervisor and the othe£ as an employee because the variant does not apply when the supervisor is scheduling a meeting with his own employee. 
Also all variants in a TAXIS trausaction hierarchy must involve the same entity classes, where we may want to involve some classes only in certain variants.For example, a variant of SCHEDULE-APPOINTMENT may require that a secretary be present to take notes, introducing an entity into that variant that is not present elsewhere.We are currently trying to extend the TAXIS model so it can represent such variants.Our extensions include introducing guards to distinguish specializations and adding optional actions and entities to transactions.A guard is a boolean expression involving the entities and the database that, when satisfied, indicates the associated specialization applies.For example, the guard scheduler i__nnclass(supervisor) and scheduler # supervisor-of(schedulee) would distinguish the variant described above where an employee's supervisor must be notified of any meeting with another supervisor. The discrimination mechanism in TAXIS is a limited form of guards that only allows testing for entities in classes.[Br78] | null | null | Main paper:
abstract:
Although a great deal of research effort has been expended in support of natural language (NL) database querying, little effort has gone to NL database update.One reason for this state of affairs is that in NL querying, one can tie nouns and stative verbs in the query to database objects (relation names, attributes and domain values). In many cases this correspondence seems sufficient to interpret NL queries.NL update seems to require database counterparts for active verbs, such as "hire," "schedule" and "enroll," rather than for stative entities.There seem to be no natural candidates to fill this role.We suggest a database counterpart for active verbs, which we call verbsraphs.The verbgraphs may be used to support NL update.A verbgraph is a structure for representing the various database changes that a given verb might describe.In addition to describing the variants of a verb, they may be used to disamblguate the update command.Other possible uses of verbgraphs include, specification of defaults, prompting of the user to guide but not dictate user interaction and enforcing a variety of types of database integrity constraints.We want to support natural language interface for all aspects of database manipulation. English and English-like query systems already exist, such as ROBOT[Ha77] , TQA[Da78] , LUNAR[W076] and those described by Kaplan[Ka79] , Walker[Wa78] and Waltz [Wz75] . We propose to extend natural language interac$ion to include data modification (insert, delete, modify) rather than simply data extraction. The desirability and unavailability of natural language database modification has been noted by Wiederhold, et al.[Wi81] .Database systems currently do not contain structures for explicit modelling of real world changes.A state of a database (OB) is meant to represent a state of a portion of the real world.This research is partially supported by NSF grants IST-79-18264 and ENG-79-07794.We refer to the abstract description of the portion of the real world being modelled as the semantic data descri~tlo n (SDD). A SDD indicates a set of real world states (RWS) of interest, a DB definition gives a set of allowable database states (DBS). The correspondence between the SDD and the DB definition induces connections between DB states and real world states.The situation is diagrammed in Natural language (NL) querying of the DB requires that the correspondence between the SDD and the DB definition be explicitly stated.The query system must translate a question phrased in terms of the SDD into a question phrased in terms of a data retrieval command in the language of the DB system.The response to the command must be translated back into terms of the SDD, which yields information about the real world state.For NL database modification, this stative correspondence between DB states and real world states is not adequate.We want changes in the real world to be reflected in the DB. In Figure 2 we see that when some action in the real world causes a state change from RWSI to RWS2, we must perform some modification to the DB to change its state from DBSI to DBS2.Databasef action D}IL RWS2 ~ DBS2 Figure 2We have a means to describe the action that changed the state of the real world: active verbs. 
We also have a means ~o describe a change in the DB state:data manipulation language (DML) command sequences.But given a real world-action, how do we find a O~XL command sequence that will agcomplish the corresponding change in the DB?Before we explore ways to represent his active correspondence--the connection between real world actions and DB updates--, let us examine how the stative correspondence is captured for use by a NL query system.We need to connect entities and relationships in the SDD with files, fields and field values in the DB. This stative correspondence between RWS and DBS is generally specified in a system file. For example, in Harris' ROBOT system, the semantic description is implici% and it is assumed to be given in English.The entities and relationships in the description are roughly English nouns and stative verbs. The correspondence of the SDD to the DB is given by a lexicon that associates English words with files, fields and field values in the DB. This lexicon also gives possible referents for word and phrases such as "who," "where" and "how much."Consider the following example.Suppose we have an office DB of employees and their scheduled meetings, reservations for meeting rooms and messages from one employee to another. We capture this information in the following four relations, Thus we can arrive at the query i__nnEMP, ROOMKESERVE retrieve name, phone where name = reserver and room = 85 and time = 2:45pm and date = CURRE~DATE Suppose we now want to make a change to the database: "Schedule Bob Marley for 2:lbpm Friday."This request could mean schedule a meeting with an individual or schedule Bob Marley for a seminar. We want to connect "schedule" with the insertion of a tuple in either APPOINTMENT or ROO~ESERVE. Although we may have pointers from "schedule" to APPOINTMENT and ROOMRESERVE, we do not have adequate information for choosing the relation to update.Although files, fields, domains and values seem to be adequate for expressing the stative correspondence, we have no similar DB objects to which we may tie verbs that describe actions in the real world.The best we can do with files, fields and domains is to indicate what is to be modified; we cannot specify how to make the modification.We need to connect the verbs "schedule," "hire" and "reserve" with some structures that dictate appropriate D:.~ sequences that perform the corresponding updates to the DB. The best we have is a specific D~ command sequence, a transaction, for each instance of "schedule" in the real world. No single transaction truly represents all the implications and variants of the "schedule" action. "Schedule" really corresponds to a set of similar transactions, or perhaps some parameterized version of a DB transaction.induced connections~/~~ DBS2 "Schedule"4.~Parameterized Transaction (PT)The desired situation is shown in Figure 3 . We hg" ~ an active correspondence between "schedule" anG a parameterized DB transaction PT. Oifferent instances of the schedule action, S1 and $2, cause differenL changes in the real worl~ s~a~. From the active correspondence of "schedule" and PT, we want to produce the proper transaction, T1 or T2, to effect the correct change in the DB state. 
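For the stative side of this example, the retrieval over EMP and ROOMRESERVE quoted earlier in this section can be mimicked directly; the following Python sketch uses invented tuples and attribute values, and merely restates the join as a comprehension.

EMP = [
    {"name": "Bob Marley", "office": 12, "phone": "x4321", "supervisor": "J. Tosh"},
    {"name": "James Parker", "office": 85, "phone": "x1187", "supervisor": "P. Simon"},
]
ROOMRESERVE = [
    {"reserver": "James Parker", "room": 85, "time": "2:45pm", "date": "1982-06-16"},
]
CURRENT_DATE = "1982-06-16"

# "in EMP, ROOMRESERVE retrieve name, phone where name = reserver and room = 85
#  and time = 2:45pm and date = CURRENT-DATE"
result = [(e["name"], e["phone"])
          for e in EMP for r in ROOMRESERVE
          if e["name"] == r["reserver"] and r["room"] == 85
          and r["time"] == "2:45pm" and r["date"] == CURRENT_DATE]
print(result)   # [('James Parker', 'x1187')]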
There is not an existing candidate for the highlevel specification language for verb descriptions.We must be able to readily express the correspondence between actions in the semantic world and verb descriptions in this high-level specification We depend heavily on this correspondence to process natural language updates, just as the statlve correspondence is used to process natural language queries.In the next section we examine these requirements in more detail and offer, by example, one candidate for the representation.Another indication of the problem of active verbs in DB shows up in looking a semantic data languages.Sematnic data models are systems for constructing precise descriptions of protions of the real world -semantic data description (SDD)using terms that come from the real world rather than a particular DB system. A SDD is a starting point for designing and comparing particular DB implementations.Some of the semantic models that have been proposed are the entity-relationshipmodel[Ch763, SDM[~81], RM/T[Co793, TAXIS[MB80] and Beta[Br78].For some of these models, methodologies exist for translating to a DB specification in various DB models, as well as for expressing the static correspondence between a SDD in the semantic model and a particular DB implementation. To express actions in these models, however, there are only terms that refer to DBs:insert, delete, modify, rather than schedule, cancel, postpone (the notable exception is Skuce[SkSO] ).While there have been a number of approaches made to NL querying, there seems to be little work on NL update. Carbonell and Hayes[CHSl] have looked at parsing a limited set of NL update commands, but they do not say much about generating the DB transactions for these commands. Kaplan and Davidson[KDSl] have looked at the translation of NL updates to transactions, but the active verbs they deal with are synonyms for DB terms, essentially following the semantic data model as above.This limitation is intentional, as the following excerpt shows:First, it is assume that the underlying database update must be a series of transactions of the same type indicated in the request.That is, if the update requests a deletion, this can only be mapped into a series of deletions in the database.While some active verbs, such as "schedule," may correspond to a single type of DB update, there are other verbs that will require multiple types of DB updates, such as "cancel," which might require sending message as well as removing an appointment. ~apian and Davidson are also trying to be domain independent, while we are trying to exploit domain-specific information.We propose a structure, a verbgraph, to represent action verbs.Verbgraph are extensions of frame-like structures used to represent verb meaning in FDRAN[Sa78] and [Sa79] .One verbgraph is associated with each sense of a verb; that structure represents all variants.A real world change is described by a sentence that contains an active verb; the DB changes are accomplished by DML command sequences.A verbgraph is used to select DNfL sequences appropriate to process the variants of verb sense. We also wish to capture that one verb that may be used as part of another: we may have a verb sense RESERVE-ROOM that may be used by itself or may be used as a subpart of the verb SCHEDULE-TALK. Figure 4 is an example of verbgraph. 
It models the "schedule appointment" sense of the verb "schedule."There are four basic variants we are attempting to capture; they are distinguished by whether or not the appointment is scheduled with someone in the company and whether or not a meeting room is to be reserved.There is also the possibility that the supervisor must be notified of the meeting.The verbgraph is directed acyclic graph (DAG) with 5 kinds of nodes: header, footer, information, AND (0) and OR (o). Header is the source of the graph, the footer is the sink.Every information node has one incoming and outgoing edge. An AND or OR node can have any number of incoming or outgoing edges.A variant corresponds to a directed path in the graph.We define a path to be connected subgraph such that I) the header is included; 2) the footer is included; 3) if it contains an information node, it contains the incoming and outgoing edge; 4) if it contains an AND node, it contains all incoming and outgoing edges; and 5) if it contains an OR node, it contains exactly one incoming and one outgoing edge.We can think of tracing a path in the graph by starting at the header and following its outgoing edge. Whenever we encounter an information node, we go through it. Whenever we encounter an ~ND node, the path divides and follows all outgoing edges. We may only pass through an AND node if all its incoming edges have been followed. An OR node can be entered on only one edge and we leave it by any of its outgoing edges.An example of a complete path is one that consists of theheader, footer, information nodes, A, B, D, J, and connector nodes, a, b, c, d, g, k, i, n. Although there is a direction to paths, we do not intend that the order of nodes on a path implies any order of processing the graph, except the footer node is always last to be processed. A variant of a verb sense is described by the set of all expressions in the information nodes contained in a path.Expressions in the information nodes can be of two basic types: assignment and restriction. The assignment type produces a value to be used in the update, either by input or computation; the key word input indicates the value comes from the user.Some examples of assignment are: The user must provide a value from the domain personname.2) (node labelled D in Figure 4 ) RES.date ÷ APPT.dateThe value for ApPT.date is used as the value RES.date.An example of restriction is: (node B in Figure 4) APPT.who in R1 where R1 = in EMP retrieve nameThis statement restricts the value of APPT.who to be a company employee.Also in Figure 4 , the symbols RI, R2, R 3 and R 4 stand for the retrievals R I = i_~nEMP retrieve name R 2 = i_nn EMP retrieve office where name = ApPT.name R 3 = i_~n EMP retrieve office where name = APPT.name or name = APPT.who.R 4 = in ~MP retrieve supervisor where name = APPT.name.In Node B, INFORM(APPT.who, APPT.name, 'meeting with me on %APPT.date at %APPT.time') stands for another verbgraph that represents sending a message by inserting a tuple in MAILBOX. We can treat the INFORM verbgraph as a procedure by specifying values for all the slots that must be filled from input.The input slots for INFORM are (name, from, message).One use for the verbgraphs is in support of NL directed manipulation of the DB. in particular, they can aid in variant selection.We assume that the correct verb sense has already been selected; we discuss sense selection later.Our goal is to use information in the query and user responses to questions to identify a path in the verbgraph. 
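Before walking through the example that follows, here is a small sketch of the first step, classifying phrases by domain and letting the domains determine which slots they can fill. It is plain Python with invented domain tests and data; only the slot names are taken from the text, and a real classifier would of course be richer.

import re

# Invented stand-ins: a fragment of the personname domain, and the slots whose
# values must come from each domain.
EMPLOYEE_NAMES = {"James Parker", "Bob Marley"}
SLOT_DOMAINS = {
    "APPT.name": "personname", "APPT.who": "personname",
    "APPT2.name": "personname", "APPT2.who": "personname",
    "APPT.date": "calendardate", "APPT2.date": "calendardate",
    "RES.date": "calendardate",
}

def classify(phrase):
    # Very rough domain classification of a phrase taken from the command.
    if phrase in EMPLOYEE_NAMES:
        return "personname"
    if re.fullmatch(r"[A-Z][a-z]+ \d{1,2}", phrase):   # e.g. "April 13"
        return "calendardate"
    return None

def fill(phrases, filled):
    # Each phrase instantiates every still-open slot of its domain, as in the
    # walkthrough below, where "James Parker" fills APPT.who and APPT2.name.
    for phrase in phrases:
        domain = classify(phrase)
        for slot, d in SLOT_DOMAINS.items():
            if d == domain and slot not in filled:
                filled[slot] = phrase
    return filled

# APPT.name and APPT2.who are the scheduler, i.e. the system variable USER.
print(fill(["James Parker", "April 13"], {"APPT.name": "USER", "APPT2.who": "USER"}))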
Let us refer again to the verbgraph for SCHEDULE-APPOINTMENT shown in Figure 4 . Suppose the user command is "Schedule an appointment with James Parker on April 13" where James Parker is a company employee.Interaction with the verbgraph proceeds as follows.First, information is extracted from the command and classified by domain.For example, James Parker is in domain personname, which can only be used to instantiate APPT.name, APPT.who, APPT2.name and APPT2.who.However, since USER is a system variable, the only slots left are APPT.who and APPT2.name, Wblch are necessarily the same. Thus we can instantiate APPT.who and ApPT2.name with "James Parker." We classify "April 13" as a calendar date and instantiate APPT.date, APPT2.date and RES.date with it, because all these must be the same. No more useful information is in the query.Second, we examine the graph to see if a unique path has been determined. In this case it has not. However, other possibilities are constrained because we know the path must go through node B. This is because the path must go through either node B or node C and by analyzing the response to retrieval RI, we can determine it must be node B (i.e., James Parker is a company employee). Now we must determine the rest of the path.One determination yet to be made is whether or not node D is in the path. Because no room was mentioned in the query, we generate from the graph a question such as '";here will the appointment take place?" Suppose the answer is "my office." Presume we can translate "my office" into the scheduler's office number.This response has two effects. First, we know that no room has to be reserved, so node D is not in the path.Second, we can fill in APPT.where in node F. Finally, all that remains to be decided is if node H is on the path. A question like "Should we notify your supervisor?" is generated.Supposing the answer is "no." Now the path is completely determined; it contains nodes A, B and F. Now that we have determined a unique path in the graph, we discover that not all the information has been filled-in in every node on the path. We now ask the questions to complete these nodes, such as '~nat time?", "For how long?" and "~at is the topic?".At this point we have a complete unique path, so the appropriate calls to INFORM can be made and the parameterized transaction in the footer can be filled-in.Note that the above interaction was quite rigidly structured.In particular, after the user issues the original command, the verbgraph instantiation program chooses the order of the subsequent data entry.There is no provision for default, or optional values.Even if optional values were allowed, the program would have to ask questions for them anyway, since the user has no opportunity to specify them subsequent to the original command. We want the interaction to be more user-dlrected. Our general principle is to allow the user to volunteer additional information during the course of the interaction, as long as the path has not been determined and values remain unspecified.We use the following interaction protocol.The user enters the initial command and hits return. The program will accept additional lines of input. However, if the user just hits return, and the program needs more information, the program will generate a question.The user answers the question, followed by a return. 
As before, additional information may be entered on subsequent lines. If the user hits return on an empty line, another question is generated, if necessary. Brodie [Br81] and Skuce [Sk80] both present systems for representing DB change. Skuce's goal is to provide an English-like syntax for DB procedure specification. Procedures have a rigid format and require all information to be entered at time of invocation in a specific order, as with any computer subprogram. Brodie is also attempting to specify DB procedures for DB change. He allows some information to be specified later, but the order is fixed. Neither allows the user to choose the order of entry, and neither accommodates variants that would require different sets of values to be specified. However, like our method, and unlike Kaplan and Davidson [KD81], they attempt to model DB changes that correspond to real-world actions rather than just specifying English synonyms for single DB commands. Certain constraints on updates are implicit in verbgraphs, such as APPT.where ← input from R3, which constrains the location of the meeting to be the office of one of the two employees. We also use verbgraphs to maintain database consistency. Integrity constraints take two forms: constraints on a single state and constraints on successive database states. The second kind is harder to enforce; few systems support constraints on successive states. Verbgraphs provide many opportunities for specifying various defaults. First, we can specify default values, which may depend on other values. Second, we can specify default paths. Verbgraphs are also a means for specifying non-DB operations. For example, if an appointment is made with someone outside the company, generate a confirmation letter to be sent. All of the above discussion has assumed we are selecting a variant where the sense has already been determined. In general, sense selection, being equivalent to the frame selection problem in Artificial Intelligence [CW76], is very difficult. We do feel that verbgraphs will aid in sense selection, but will not be as efficacious as for variant selection. In such a situation, perhaps the English parser can help disambiguate, or we may want to ask an appropriate question to select the correct sense, or, as a last resort, provide menu selection. We are currently considering hierarchically structured transactions, as used in the TAXIS semantic model [MB80], as an alternative to verbgraphs. Verbgraphs can be ambiguous, and do not lend themselves to top-down design. Hierarchical transactions would seem to overcome both problems. However, hierarchical transactions in TAXIS are not quite as versatile as verbgraphs in representing variants. The hierarchy is induced by hierarchies on the entity classes involved. Variants based on the relationship among particular entities, as recorded in the database, cannot be represented. For example, in the SCHEDULE-APPOINTMENT action, we may want to require that if a supervisor schedules a meeting with an employee not under his supervision, a message must be sent to that employee's supervisor. This variant cannot be distinguished by classifying one entity as a supervisor and the other as an employee, because the variant does not apply when the supervisor is scheduling a meeting with his own employee.
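The entry protocol described above (an initial command, optional volunteered lines, and a generated question whenever the user hits return on an empty line) reduces to a simple loop. This is only a sketch; the parse argument stands in for the verbgraph instantiation machinery and is not part of the original system.

def interact(command, open_slots, parse):
    # open_slots: set of slot names still unfilled
    # parse(text, open_slots): removes from open_slots whatever slots the text fills
    parse(command, open_slots)
    while open_slots:
        line = input("> ").strip()
        if line:                                   # the user volunteers more information
            parse(line, open_slots)
        else:                                      # empty line: generate the next question
            slot = sorted(open_slots)[0]
            answer = input("What value for %s? " % slot).strip()
            if answer:
                open_slots.discard(slot)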
Also, all variants in a TAXIS transaction hierarchy must involve the same entity classes, whereas we may want to involve some classes only in certain variants. For example, a variant of SCHEDULE-APPOINTMENT may require that a secretary be present to take notes, introducing an entity into that variant that is not present elsewhere. We are currently trying to extend the TAXIS model so it can represent such variants. Our extensions include introducing guards to distinguish specializations and adding optional actions and entities to transactions. A guard is a boolean expression involving the entities and the database that, when satisfied, indicates that the associated specialization applies. For example, the guard scheduler in class(supervisor) and scheduler ≠ supervisor-of(schedulee) would distinguish the variant described above, where an employee's supervisor must be notified of any meeting with another supervisor. The discrimination mechanism in TAXIS is a limited form of guards that only allows testing for entities in classes.
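Read as code, such a guard is just a boolean function over the participating entities and the database. A possible rendering follows, with a toy database object standing in for the real TAXIS classes; all names and data here are invented for illustration.

class ToyDB:
    def __init__(self, supervisors, supervisor_of):
        self._sup = supervisors            # set of supervisor names
        self._sup_of = supervisor_of       # employee name -> supervisor name
    def members(self, cls):
        return self._sup if cls == "supervisor" else set()
    def supervisor_of(self, emp):
        return self._sup_of.get(emp)

def notify_supervisor_guard(scheduler, schedulee, db):
    # True when the scheduler is a supervisor, but not the schedulee's own supervisor
    return scheduler in db.members("supervisor") and scheduler != db.supervisor_of(schedulee)

db = ToyDB({"Ann", "Raj"}, {"Bob": "Ann", "Cho": "Raj"})
print(notify_supervisor_guard("Raj", "Bob", db))   # True: the specialization applies
print(notify_supervisor_guard("Ann", "Bob", db))   # False: Ann is Bob's own supervisor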
Appendix:
| null | null | null | null | {
"paperhash": [
"hammer|database_description_with_sdm:_a_semantic_database_model",
"hayes|multi-strategy_construction-specific_parsing_for_flexible_data_base_query_and_update",
"mylopoulos|a_language_facility_for_designing_database-intensive_applications",
"codd|extending_the_database_relational_model_to_capture_more_meaning",
"salveter|inferring_conceptual_graphs",
"salveter|inferring_conceptual_structures_from_pictorial_input",
"chen|the_entity-relationship_model:_toward_a_unified_view_of_data",
"waltz|natural_language_access_to_a_large_data_base:_an_engineering_approach",
"kaplan|interpreting_natural_language_database_updates",
"damerau|the_derivation_of_answers_from_logical_forms_in_a_question_answering_system"
],
"title": [
"Database description with SDM: a semantic database model",
"Multi-Strategy Construction-Specific Parsing for Flexible Data Base Query and Update",
"A language facility for designing database-intensive applications",
"Extending the database relational model to capture more meaning",
"Inferring Conceptual Graphs",
"Inferring Conceptual Structures From Pictorial Input",
"The entity-relationship model: toward a unified view of data",
"Natural language access to a large data base: an engineering approach",
"Interpreting Natural Language Database Updates",
"The Derivation of Answers from Logical Forms in a Question Answering System"
],
"abstract": [
"SDM is a high-level semantics-based database description and structuring formalism (database model) for databases. This database model is designed to capture more of the meaning of an application environment than is possible with contemporary database models. An SDM specification describes a database in terms of the kinds of entities that exist in the application environment, the classifications and groupings of those entities, and the structural interconnections among them. SDM provides a collection of high-level modeling primitives to capture the semantics of an application environment. By accommodating derived information in a database structural specification, SDM allows the same information to be viewed in several ways; this makes it possible to directly accommodate the variety of needs and processing requirements typically present in database applications. The design of the present SDM is based on our experience in using a preliminary version of it.\nSDM is designed to enhance the effectiveness and usability of database systems. An SDM database description can serve as a formal specification and documentation tool for a database; it can provide a basis for supporting a variety of powerful user interface facilities, it can serve as a conceptual database model in the database design process; and, it can be used as the database model for a new kind of database management system.",
"The advantages of a multi-strategy, construction-specific approach to parsing in applied natural language processing are explained through an examination of two pilot parsers we have constructed. Our approach exploits domain semantics and prior knowledge of expected constructions, using multiple parsing strategies each optimized to recognize different construction types. It is shown that a multi strategy approach leads to robust, flexible, and efficient parsing of both grammatical and ungrammatical input in limited-domain, task oriented, natural language interfaces. We also describe plans to construct a single, practical, multi-strategy parsing system that combines the best aspects of the two simpler parsers already implemented into a more complex, embedded-constituent control structure. Finally, we discuss some issues in data base access and update, and show that a construction-specific approach, coupled with a case structured data base description, offers a promising approach to a unified, interactive data base query and update system.",
"TAXIS, a language for the design of interactive information systems (e.g., credit card verification, student-course registration, and airline reservations) is described. TAXIS offers (relational) database management facilities, a means of specifying semantic integrity constraints, and an exception-handling mechanism, integrated into a single language through the concepts of class, property, and the IS-A (generalization) relationship. A description of the main constructs of TAXIS is included and their usefulness illustrated with examples.",
"During the last three or four years several investigators have been exploring “semantic models” for formatted databases. The intent is to capture (in a more or less formal way) more of the meaning of the data so that database design can become more systematic and the database system itself can behave more intelligently. Two major thrusts are clear. (1) the search for meaningful units that are as small as possible—atomic semantics; (2) the search for meaningful units that are larger than the usual n-ary relation—molecular semantics. In this paper we propose extensions to the relational model to support certain atomic and molecular semantics. These extensions represent a synthesis of many ideas from the published work in semantic modeling plus the introduction of new rules for insertion, update, and deletion, as well as new algebraic operators.",
"This paper investigates the mechanisms a program may use to learn conceptual structures that represent natural language meaning. A computer program named Moran is described that infers conceptual structures from pictorial input data. Moran is presented with “snapshots” of an environment and an English sentence describing the action that takes place between the snapshots. The learning task is to associate each root verb with a conceptual structure that represents the types of objects that participate in the action and the changes the objects undergo during the action. Four learning mechanisms are shown to be adequate to accomplish this learning task. The learning mechanisms are described along with the conditions under which each is invoked and the effect each has on existing memory structures. The conceptual structure Moran inferred for one root verb is shown.",
"This paper discusses the mechanisms a program may use to learn conceptual structures that represent natural language meaning. A computer program named Moran is described that infers conceptual structures from simulated pictorial input data. Moran is presented “snapshots” of an environment and an English sentence describing the action that takes place between the snapshots. The learning task is to associate each root verb with a conceptual structure that represents the types of objects that participate in the action and the changes the objects undergo during the action. Four learning mechanisms are shown to be adequate to accomplish this learning task. The learning mechanisms are described along with the conditions under which each is invoked and the effect each has on existing memory structures.",
"A data model, called the entity-relationship model, which incorporates the semantic information in the real world is proposed. A special diagramatic technique is introduced for exhibiting entities and relationships. An example of data base design and description using the model and the diagramatic technique is given. The implications on data integrity, information retrieval, and data manipulation are discussed.",
"An intelligent program which accepts natural language queries can allow anon-technical user to easily obtain information from a large non-uniform data base. This paper discusses the design of a program which will tolerate a wide variety of requests including ones with pronouns and referential phrases. The system embodies a certain amount of common sense, so that for example, it \"knows when it does or does not understand a particular request and it can bypass actual data base search in answering unreasonable requests. The system is conceptually simple and could be easily adapted to other data bases.",
"Although the problems of querying databases in natural language are well understood, the performance of database updates via natural language introduces additional difficulties. This thesis examines the problems encountered in interpreting natural language updates, and describes an implemented system that performs simple updates. \nThe difficulties associated with natural language updates results from the fact that the user will naturally phrase requests with respect to his conception of the domain, which may be a considerable simplification of the actual underlying database structure. Updates that are meaningful and unambiguous from the user's standpoint may not translate into reasonable changes to the underlying database. \nThe PIQUE system (Program for Interpretation of Queries and Updates in English) operates by maintaining a simple model of the user and interpreting update requests with respect to that model. For a given request, a limited set of \"candidate updates\"--alternative ways of fulfilling the request--are considered, and ranked according to a set of domain-independent heuristics that reflect general properties of \"reasonable\" updates. The leading candidate may be performed, or the highest ranking alternatives presented to the user for selection. The resultant action may also include a warning to the user about unanticipated side effects, or an explanation for the failure to fulfill a request. \nThis thesis describes the PIQUE system in detail, presents examples of its operation, and discusses the effectiveness of the system with respect to coverage, accuracy, efficiency, and portability. The range of behaviors required for natural language update systems in general is discussed, and implications of updates on the design of data models are briefly considered.",
"This papex"
],
"authors": [
{
"name": [
"M. Hammer",
"D. McLeod"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Hayes",
"J. Carbonell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Mylopoulos",
"PHILIP A. Bernstein",
"Harry K. T. Wong"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Codd"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sharon C. Salveter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sharon C. Salveter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Peter P. Chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Waltz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Jerrold Kaplan",
"J. Davidson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"F. J. Damerau"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"207654596",
"18819553",
"267070888",
"17517212",
"5202078",
"15190887",
"52801746",
"62861335",
"26893302",
"219308139"
],
"intents": [
[],
[],
[],
[],
[],
[],
[],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | Problem: The lack of research effort in supporting natural language database update compared to querying.
Solution: Proposing the use of verbgraphs as a database counterpart for active verbs to facilitate natural language database update, enabling the representation of various database changes that active verbs might describe and supporting NL update commands. | 512 | 0.021484 | null | null | null | null | null | null | null | null |
af37c833f3f5522493d688c993598e7020f36472 | 5926515 | null | A Model of Early Syntactic Development | AMBER is a model of first language acquisition that improves its performance through a process of error recovery. The model is implemented as an adaptive production system that introduces new condition-action rules on the basis of experience. AMBER starts with the ability to say only one word at a time, but adds rules for ordering goals and producing grammatical morphemes, based on comparisons between predicted and observed sentences. The morpheme rules may be overly general and lead to errors of commission; such errors evoke a discrimination process, producing more conservative rules with additional conditions. The system's performance improves gradually, since rules must be relearned many times before they are used. AMBER'S learning mechanisms account for some of the major developments observed in children's early speech. | {
"name": [
"Langley, Pat"
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 21 | 5 | null | In this paper, I present a model that attempts to explain the regularities in children's early syntactic development. The model is called AMBER, an acronym for Acquisition Model Based on Error Recovery. As its name implies, AMBER learns language by comparing its own utterances to those of adults and attempting to correct any errors. The model is implemented as an adaptive production system -a formalism well-suited to modeling the incremental nature of human learning. AMEER focuses on issues such as the omission of content words, the occurrence of telegraphic speech, and the order in which function words are mastered. Before considering AMBER in detail, I will first review some major features of child language, and discuss some earlier models of these phenomena.Children do not learn language in an all.or.none fashion. They begin their linguistic careers uttering one word at a time, and slowly evolve through a number of stages, each containing more adult-like speech than the one before. Around the age of one year, the child begins to produce words in isolation, and continues this strategy for some months. At approximately 18 months, the child begins to combine words into meaningful sequences. In order-based languages such as English, the child usually follows the adult order. Initially only pairs of words are produced, but these are followed by three-word and later by four-word utterances. The simple sentences occurring in this stage consist almost entirely of content words, while grammatical morphemes such as tense endings and prepositions are largely absent.During the period from about 24 to 40 months, the child masters the grammatical morphemes which were absent during the previous stage. These "function words" are learned gradually; the time between the initial production of a morpheme and its mastery may be as long as 16 months. Brown (1973) has examined the order in which 14 English morphemes are acquired, finding the order of acquisition to be remarkably consistent across children. In addition, those morphemes with simpler meanings and involved in fewer transformations are learned earlier than more complex ones. These findings place some strong constraints on the learning mechanisms one postulates for morpheme acquisition. Now that we have reviewed some of the major aspects of child language, let us consider the earlier attempts at modeling these phenomena. Computer programs that learn language can be usefully divided into two groups: those which take advantage of semantic feedback, and those which do not. In general, the early work concerned itself with learning grammars in the absence of information about the meaning of sentences. Examples of this approach can be found in Solomonoff (1959) , Feldman (1969) and Homing (1969) . Since children almost certainly have semantic information available to them, I will not focus on their research here. However, much of the early work is interesting in its own right, and some excellent systems along these lines have recently been produced by Berwick (1980) and Wolff (1980) .In the late 1960's, some researchers began to incorporate semantic information into their language learning systems. 
The majority of the resulting programs showed little concern with the observed phenomena, including Siklossy's ZBIE (1972), Klein's AUTOLING (1973), Hedrick's production system model (1976), Anderson's LAS (1977), and Sembugamoorthy's PLAS (1979). These systems failed as models of human language acquisition in two major areas. First, they learned language in an all-or-none manner, and much too rapidly to provide useful models of child language. Second, these systems employed conservative learning strategies in the hope of avoiding errors. In contrast, children themselves make many errors in their early constructions, but eventually recover from them. However, a few researchers have attempted to construct plausible models of the child's learning process. For example, Kelley (1967) has described an "hypothesis testing" model that learned successively more complex phrase structure grammars for parsing simple sentences. As new syntactic classes became available, the program rejected its current grammar in favor of a more accurate one. Thus, the model moved from a stage in which individual words were viewed as "things" to the more sophisticated view that "subjects" precede "actions". One drawback of the model was that it could not learn new categories on its own initiative; instead, the author was forced to introduce them manually. Reeker (1976) has described PST, another theory of early syntactic development. This model assumed that children have limited short term memories, so that they store only portions of an adult sample sentence. The model compared this reduced sentence to an internally generated utterance, and differences between the two were noted. Six types of differences were recognized (missing prefixes, missing suffixes, missing infixes, substitutions, extra words, and transpositions), and each led to an associated alteration of the grammar. PST accounted for children's omission of content words and the gradual increase in utterance length. The limited memory hypothesis also explained the telegraphic nature of early speech, though Reeker did not address the issue of function word acquisition. Overgeneralizations did occur in PST, but the model could revise its grammar upon their discovery, so as to avoid similar errors in the future. PST also helped account for the incremental nature of language acquisition, since differences were addressed one at a time and the grammar changed only slowly. Selfridge (1981) has described CHILD, another program that attempted to explain some of the basic phenomena of first language acquisition. This system began by learning the meanings of words in terms of a conceptual dependency representation. Word meanings were initially overly specific, but were generalized as more examples were encountered. As more words were learned and their definitions became less restrictive, the length of CHILD's utterances increased. CHILD differed from other models of language learning by incorporating a nonlinguistic component. This enabled the system to correctly respond to adult sentences such as Put the ball in the box, and led to the appearance that the system understood language before it could produce it. Of course, this strategy sometimes led to errors in comprehension. Coupled with the disapproval of a tutor, such errors were one of the major spurs to the learning of word orders.
Syntactic knowledge was stored with the meanings of words, so that the acquisition of syntax necessarily occurred after the acquisition of individual words.Although tl~ese systems fare much better as psychological models than other language learning programs, they have some important limitations. We have seen that Kelley's system required syntactic classes to be introduced by hand, making his explanation less than satisfactory. Selfridge's CHILD was much more robust than Kelley's program, and was unique in modeling children's use of nonlinguistic cues for understanding. However, CHILD'S explanation for the omission of content words -that those words are not yet known -was implausible, since children often omit words that they have used in previous utterances. Reeker's PST explained this phenomenon through a limited memory hypothesis, which is consistent with our knowledge of children's memory skills. Still, PST included no model of the process through which memory improved; in order to simulate the acquisition of longer constructions, Reeker would have had to increase the system's memory size by hand. Both CHILD and PST learned relatively slowly, and made mistakes of the general type observed with children. Both systems addressed the issue of error recovery, starting off as abominable language users, but getting progressively better with time. This is a promising approach that I' attempt to develop it in its extreme form in the following pages.In the preceding pages, we have seen that AMEER offers explanations for a number of phenomena observed in children's early speech. These include the omission of content words and morphemes, the gradual manner in which these omissions are overcome, and the order in which grammatical morphemes are mastered.As a psychological model of early syntactic development, AMEER constitutes an improvement over previous language learning programs. However, this does not mean that the model can not be improved, and in this section I outline some directions for future research efforts.One of the criteria by which any scientific theory can be judged is simplicity, and this is one dimension along which AMEER could stand some improvement. In particular, some of AMBER'S learning heuristics for coping with errors of omission incorporate considerable knowledge about the task of learning a language. For example, AMEER knows the form of the rules it will learn for ordering goals and producing morphemes. Another questionable piece of information is the distinction between major and minor meanings that lets AMEER treat content words and morphemes as completely separate entities. One might argue that the child is born with such knowledge, so that any model of language acquisition should include it as well, However, until such innateness is proven, any model that can manage without such information must be considered simlsler, more elegant, and more desirable than a model that requires it to learn a language.In contrast to these domain-apecific heuristics, AMBER'S strategy for dealing with errors of commission incorporates an apparently domain-independent learning mechanism -the discrimination process. This heuristic can be applied to any domain in which overly general rules lead to errors, and can be used on a variety of representations to discover the conditions under which such rules should be selected. 
In addition to language development, the discrimination process has been applied to concept learning (Anderson, Kline, and Beasely, 1979; Langley, 1982) and strategy acquisition (Brazdil, 1978; Langley, 1982) ~ Langley (1982) has discussed the generality and power of discrimination-based approaches to learning in greater detail. As we shall see below, this heuristic may Provide a more plausible explanation for the learning of word order. Moreover, it opens the way for dealing with some aspects of language acquisition that AMBER has so far ignored -the learning of word/concept links and the mastering of irregular constructions.AMBER learns the order of content words through a two-stage process, first learning to prefer some relations (like agent) over others (like action or object), and then learning the relative orders in which such relations should be described. The adaptive productions responsible for these transitions contain the actual form of the rules that are learned; the particular rules that result are simply instantiations of these general forms. Ideally, future versions of AMBER should draw on more general learning strategies to acquire ordering rules.Let us consider how the discrimination mechanism might be applied to the discovery of such rules. In the existing system, the generation of "ball" without a preceding "Daddy" is viewed as an error of omission. However, it could as easily be viewed as an error of commission in which the goal to describe the object was prematurely satisfied. In this case, one might use discrimination to generate a variant version of the start rule:If you want to describe node1, and node2 is the object of node1, and node3 is the agent of nodel, and you have described node3, then describe node2.This production is similar to the start rule, except that it will set up goals only to describe the object of an event, and then only if the agent has already been described. In fact, this rule is identical to the agent-object rule discussed in an earlier section; the important point is that it is also a special case of the start rule that might be learned through discrimination when the more general rule fires inappropriately. The same process could lead to variants such as the agent rule, which express preferences rather than order information.Rather than starting with knowledge of the forms of rules at the outset, AMBER would be able to determine their form through a more general learning heuristic.The current version of AMSEn relies heavily on the representational distinction between major meanings and mcJulations of those meanings. Unfortunately, some languages express through content wor~s what others express through grammatical morphemes. Future versions of the system should lessen this distinction by using the same representation for both types o[ information. In addition, the model might employ a single production for learning to produce both content words and morphemes; thus, the program would lack the speak rule described earlier, but would construct specific versions of this production for particular words and morphemes. This would also remedy the existing model's inability to learn new connections between words and concepts.Although the resulting rules would probably be overly general, AMBER would be able to recover from the resulting errors by additional use of the discrimination mechanism.The present model also makes a distinction between morphemes that act as prefixes (such as "the") and those that act as suffixes (such as "ing"). 
Two separate learning rules are responsible for recovering from function word omissions, and although they are very similar, the conditions under which they apply and the resulting morpheme rules are different. Presumably, if a single adaptive production for learning words and morphemes were introduced, it would take over the functions of both the prefix and suffix rules. If this approach can be successfully implemented, then the current reliance on pause information can be abandoned as welt, since the pauses serve only to distinguish suffixes from prefixes. Such a reorganization would considerably simplify the theory, but it would also lead to two complications. First, the resulting system would tend to produce utterances like "Daddy ed" or "the bounce", before it learned the correct conditions on morphemes through discrimination. (This problem is currently avoided by including information about the relation when a morpheme rule is first built, but this requires domain-specific knowledge about the language learning task.) Since children very seldom make such errors, some other mechanism must be found to explain their absence, or the model's ability to account for the observed phenomena will suffer, Second, if pause information (and the ability to take advantage of such information) is removed, the system wilt sometimes decide a prefix is a suffix and vice versa. For example, AMBER might construct a rule to say "ing" before the object of an event is described, rather than after the action has been mentioned. However, such variants would have little effect on the system's overall performance, since they would be weakened if they ever led to deviant utterances, and they would tend to be learned less often than the desired rules in any case. Thus, the strengthening and weakening processes would tend to direct search through the space of rules toward the correct segmentation, even in the absence of pause information.Another of AMBER'S limitations lies in its inability to learn irregular constructions such as "men" and "ate". However, by combining discrimination and the approach to learning word/concept links described above, future implementations should fare much better along this dimension. For example, consider the irregular noun "foot", which forms the plural "feet". Given a mechanism for connecting words and concepts, AMBER might initially form a rule connecting the concept *foot to the word "foot". After gaining sufficient strength, this rule would say "~?'~+" whenever seeing an example of the concept °foot. Upon encountering an occurrence of "feet", the system would note the error of commission and call on discrimination. This would lead to a variant rule that produced "foot" only when a sing/e marker was present. Also, a new rule connecting "foot to "feet" would be created. Eventually, this new rule would also lead to errors of commission, and a variant with a plural condition would come to replace it.Dealing with the rule for producing the plural marker "s" would be somewhat more difficult. Although AMBER might initially learn to say "foot" and "feet" under the correct circumstances, it would eventually learn the general rule for saying "s" after plural agents and objects. This would lead to constructions such as "feet s", which have been observed in children's utterances. The system would have no difficulty in detecting such errors of commission, but the appropriate response is not so clear. 
Conceivably, AMBER could create variants of the "s" rule which stated that the concept to be described must not be =foot. However, a similar condition would atso have to be included for every situation in which irregular pluralization occurred (deer, man, cow, and so on). Similar difficulties arise with irregular constructions for the past tense.A better solution would have AMBER construct a special rule for each irregular word, which "imagined" that the inflection had already been said. Once these productions became stronger than the %" and "ed" rules, they would prevent the latter's application and bypass the regular constructions in these cases. Overly general constructions like "foot s" constitute a related form of error. Although AMBER would generate such mistakes before the irregular form was mastered, it would not revert to the overgeneral regular construction at a later point, as do many children. The area of irregular constructions is clearly a phenomenon that deserves more attention in the future. | null | null | Although Reeker's PST and Selfridge's CHILD address the transition from one-word to multi-word utterances, we have seen that problems exist with both accounts. Neither of these programs focus on the acquisition of function words, their explanations of content word omissions leave something to be desired, and though they learn more slowly than other systems, they still learn more rapidly than children. In response to these limitations, the goals of the current research are:• Account for the omission of content" words, and the eventual recovery from such omissions. • Account for the omission of function words, and the order in which these morphemes are mastered.• Account for the gradual nature of both these linguistic developments. In this section I provide an overview of AMBER, a model that provides one set of answers to these questions. Since more is known about children's utterances than their ability to understand the utterances of others, AMBER models the learning of generation strategies, rather than strategies for understanding language.Selfridge's and Reeker's models differ from other language learning systems in their concern with the problem of recovering from errors. The current research extends this idea even further, since all of AMBER'S learning strategies operate through a process of error recovery. 1 The model is presented with three pieces of information: a legal sentence, an event to be described, and a main goal or topic of the sentence. An event is represented as a semantic network, using relations like agent, action, object, size, color, and type. The specification of one of the nodes as the main topic allows the system to restate the network as a tree structure, and it is from this tree that AMBER generates a sentence. If this sentence is identical to the sample sentence, no learning is required. If a disagreement between the two sentences is found, AMBER modifies its set of rules in an attempt to avoid similar errors in the future, and the system moves on to the next example.AMBER'S performance system is stated as a set of conditionaction rules or productions that operate upon the goal tree to produce utterances. Although the model starts with the potential for producing (unordered) telegraphic sentences, it can initially generate only one word at a time. To see why this occurs, we must consider the three productions that make up AMBER'S initial performance system. 
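The masking behavior suggested here, in which a specific rule for an irregular word comes to outcompete the general suffix rule once it has been relearned often enough, amounts to conflict resolution by strength. A toy illustration follows; the rule records and numbers are invented and are not AMBER's actual PRISM representation.

def select_rule(matching_rules):
    # conflict resolution: among the rules whose conditions match, the strongest fires
    return max(matching_rules, key=lambda r: r["strength"])

rules = [
    {"name": "plural-s",       "strength": 0.9, "produces": "foot s"},
    {"name": "irregular-feet", "strength": 0.3, "produces": "feet"},
]
print(select_rule(rules)["produces"])   # early on, the over-general form "foot s" wins
rules[1]["strength"] = 1.2              # after the irregular rule is relearned many times
print(select_rule(rules)["produces"])   # now the special-case rule masks it: "feet"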
The first rule (the start rul~) is responsible for establishing subgoals; it may be paraphrased as:If you want to describe node1, and node2 is in relation to node1, then describe node2.Matching first against the main goal node, this rule selects one of the nodes below it in the tree and creates a subgoal to describe that node. This rule continues to establish lower level goals until a terminal node is reached. At this point, a second production (the speak rule) is matched; this rule may be stated:If you want to describe a conceptt and word is the word for concept, then say word and note that concept has been described.This production retrieves the word for the concept AMBER wants to describe, actually says this word, and marks the terminal goal as satisfied. Once this has been done, the third and final performance production becomes true. This rule matches whenever a subgoal has been satisfied, and attempts to mark the supergoal as satisfied; it may be paraphrased as:If you want to describe node1, and node2 is in re/ation to nodel, and node2 has already been described, then note that node1 has been described.Since the stop rule is stronger 3 than the start rule (which would like to create another subgoal), it moves back up the tree, marking each of the active goals as satisfied (including the main goal). As a result, AMBER believes it has successfully described an event after it has uttered only a single word. Thus, although the model starts with the potential for producing multi.word utterances, it must learn additional rules (and make them stronger than the stop rule) before it can generate multiple content words in the correct order.In general, AMBER learns by comparing adult sentences to the sentences it would produce in the same situations. These predictions reveal two types of mistakes -errors of omission and errors of commission. These errors are detected by additional/earning productions that are responsible for creating new performance rules. Thus, AMBER is an example of what Waterman (1975) has called an adaptive production system, which modifies its own behavior by inserting new conditionaction rules. Below I discuss AMBER'S response to errors of omission, since these are the first to occur and thus lead to the system's first steps beyond the one-word stage. I consider the omission of content words first, and then the omission of grammatical morphemes. Finally, I discuss the importance of errors of commission in discovering conditions on the production of morphemes.AMBER'S initial self-modifications result from tile failure to predict content words. Given its initial ability to say one word at a time, the system can make two types of content word omissions -it can fail to predict a word before a correctly predicted one, or it can omit a word after a correctly predicted one. Rather different rules are created in each case. For example, imagine that Daddy is bouncing a ball, and suppose that AMBEa predicted only the word "ball", while hearing the sentence "Daddy is bounce ing the ball". In this case, one of the system's learning rules would note the omitted content word 3The notion of strength plays an important role in AMBER'S explanation of language learning. When a new rule is created, it is given a low initial strength, but this is increased whenever that rule is relearned. 
And since stronger productions are preferred to their weaker competitors, rules that have been learned many times determine behavior."Daddy" before the content word "ball", and an agent production would be created: AGENT If you want to describe event1, and agent1 is the agent of event1, then desc ribe agent1.Although I do not have the space to describe the responsible learning rule in detail, I can say that it matches against situations in which one content word is omitted before another, and that it always constructs new productions with the same form as the agent rule described above. In this case, it would also create a similar rule for describing actions, based on the omitted "bounce". Note that these new productions do not give AMBER the ability to say more than one word at a time. They merely increase the likelihood that the program will describe the agent or action of an event instead of the object.However, as AMBER begins to prefer agents to actions and actions to objects, the probability of the second type of error (omitting a word after a correctly predicted one) increases. For example, suppose that Daddy is again bouncing a ball, and the system says "Daddy" while it hears "Daddy is bounce ing the ball". In this case, a slightly different production is created that is responsible for ordering the creation of goals. Since the agent relation was described but the object was omitted, an agent. object rule is constructed:If you want to describe event1, and agent1 is the agent of event1, and you have described agent1, and object1 is the object of event1, then describe object1.Together with the agent rule shown above, this production lets AMBER produce utterances such as "Daddy ball". Thus, the model provides a simple explanation of why children omit some content words in their early multi-word utterances. Such rules must be constructed many times before they become strong enough to have an effect, but eventually they let the system produce telegraphic sentences containing all relevant content words in the standard order and lacking only grammatical morphemes.Once AMBER begins to correctly predict content words, it can learn rules for saying grammatical morphemes as well. As with content words, such rules are created when the system hears a morpheme but fails to predict it in that position. For example, suppose the. program hears the sentence "Daddy ° is bounce ing "the ball", 4 but predicts only "Daddy bounce ball". In this case, the following rule is generated:ING-1 If you have described action1,and action1 is the action of event1, then say ING.Once it has gained sufficient strength, this rule will say the morpheme "ing" after any action word. As stated, the production is overly general and will lead to errors of commission. I consider AMBER'S response to such errors in the following section.4Asterisks represent pauses in the adult sentence. These cues are necessary for AMBER to decide that a morpheme like "is" is a prefix for "bounce" instead of a suffix for "Daddy".The omission of prefixes leads to very similar rules. In the above example, the morpheme "is" was omitted before "bounce", leading to the creation of a prefix rule for producing the missing function word: IS-1 If you want to describe action1, and action I is the action of event1, then say IS.Note that this rule will become true before an action has been described, while the rule ing-I can apply only after the goal to describe the action has been satisfied. 
AMBER uses such conditions to control the order in which morphemes are produced. Figure 1 shows AMBER'S mean length of utterance as a function of the number of sample sentences (taken in groups of five) seen by the program, b As one would expect, the system starts with an average of around one word per utterance, and the length slowly increases with time. AMBER moves through a two. word and then a three-word stage, until it eventually produces sentences lacking only grammatical morphemes. Finally, the morphemes are included, and adult-like sentences are produced. The incremental nature of the learning curve results from the piecemeal way in which AMBER learns rules for producing sentences, and from the system's reliance on the strengthening process. Errors of commission occur when AMBER predicts a morpheme that does not occur in the adult sentence. These errors result from the overly general prefix and suffix rules that we saw in the last section. In response to such errors, AMBER calls on a discrimination routine in an attempt to generate more conservative productions with additional conditions. ~ Earlier, I considered a rule (is-1) for producing "is" before the action of an event. As stated, this rule would apply in inappropriate situations as well as correct ones. For example, suppose that AMBER learned this rule in the context of the sentence "Daddy is bounce ing the ball". Now suppose the system later uses this rule to predict the same sentence, but that it instead hears the sentence "Daddy was bounce ing the ball".5AMBER iS implemented on a PDP KL. tO in PRISM (Langley and Neches, t981), an adaptive production system language designed for modeling learning phenomena; the run summarized in Figure t took approximately 2 hours of CPU time.At this point, AMBER'S discrimination routine would retrieve the rule responsible for predicting "is" and lowers its strength; it would also retrieve the situation that led to the faulty application, passing this information to the discrimination routine. Comparing the earlier good case to the current bad case, the discrimination mechanism finds only one difference -in the good example, the action node was marked present, while no such marker occurred during the faulty application. The result is a new production that is identical to the original rule, except that an additional condition has been included:If you want to describe action1, and action I is the action of event1, and action1 is in the present, then say IS.This new condition will let the variant rule fire only when the action is marked as occurring in the present. When first created, the is-2 production is too weak to be seriously considered. However, as it is learned again and again, it will eventually come to mask its predecessor. This transition is aided by the weakening of the faulty is-1 rule each time it leads to an error.Once the variant production has gained enough strength to apply, it will produce its own errors of commission. For example, suppose AMBER uses the is-2 rule to predict "The boy s is bounce ing the ball", while the system hears "The boy s are bounce ing the ball". This time the difference is more complicated. The fact that the action had an agent in the good situation is no help, since an agent was present during the faulty firing as well. However, the agent was singular in the first case but not during the second. 
Accordingly, the discrimination mechanism creates a secondvariant:If you want to describe action1, and action1 is the action of event1, and action1 is in the present, and agent1 is the agent of event1, and agent1 is singular, then say IS.The resulting rule contains two additional conditions, since the learning process was forced to chain through two elements to find a difference.Together, these conditions keep the production from saying the morpheme "is" unless tl~e agent of the current action is singular in number.Note that since the discrimination process must learn these sets of conditions separately, an important prediction results: the more complex the conditions on a morpheme's use, the longer it will take to master.For example, three sets of conditions are required for the "is" rule, while only a single condition is needed for the "ing" production. As a result, the former is mastered after the latter, just as found in children's speech. Table 1 presents the order of acquisition for the six classes of morpheme learned by AMBER, and the order in which the same morphemes were mastered by Brown's children. The number of sample sentences the model required before mastery are also included.6Anderson's ALAS (1981) system uses a very similar process to recover from overly general morpheme rules. AMBER and AL, ~ :~ have much in common, both having grown out of discussions between Anderson and the author. Although there is considerable overlap, ALAS generally accounts for later developments in children's speech than does AMBER. The general trend is very similar for the children and the model, but two pairs of morphemes are switched. For AMEER, the plural construction was mastered before "ing", while in the observed data the reverse was true. However, note that AMBER mastered the progressive construction almost immediately after the plural, so this difference does not seem especially significant. Second, the model mastered the articles "the", "a", and "some" before the construction for past tense. However, Brown has argued that the notions of "definite" and "indefinite" may be more complex than they appear on the surface; thus, AMBER'S representation of these concepts as single features may have oversimplified matters, making articles easier to learn than they are for the child.Thus, the discrimination process provides an elegant explanation for the observed correlation between a morpheme's complexity and its order of acquisition. Observe that if the conditions on a morpheme's application were learned through a process of generalization such as that proposed by Winston (1970) , exactly the opposite prediction would result. Since generalization operates by removing conditions which differ in successive examples, simpler rules would be finalized later than more complex ones. Langley (1982) has discussed the differences between generalization-based and discrimination. based approaches to learning in more detail. Table 1 . Order of morpheme mastery by the child and AMBER.Some readers will have noted the careful crafting of the above examples, so that only one difference occurred in each case. This meant that the relevant conditions were obvious, and the discrimination mechanism was not forced to consider alternate corrections. In order to more closely model the environment in which children learn language, AMBER was presented with randomly generated sentence/meaning pairs. 
Thus, it was usually impossible to determine the correct discrimination that should be made from a single pair of good and bad situations. AMBER'S response to this situation is to create all possible discriminations, but to give each of the variants a low initial strengtl~. Correct rules, or rules containing at least some correct conditions, are learned more often than rules containing spurious conditions. And since AMBER strengthens a production whenever it is relearned, variants with useful conditions come to be preferred over their competitors. Thus, AMEER may be viewed as carrying out a breadth-first search through the space of possible rules, considering many alternatives at the same time, and selecting the best of these for further attention. Only variants that exceed a certain threshold (generally those with correct conditions) lead to new errors of commission and additional variants. Eventually, this search process leads to the correct rule, even in the presence of many irrelevant features. Figure 2 presents the learning curves for the "ing" morpheme. Since AMEER initially lacks an "ing" rule, errors of commission abound at the outset, but as this production and its variants are strengthened, such errors decrease. In contrast, errors of commission are absent at the beginning, since AMEER lacks an "ing" rule to make false predictions. As the morpheme rule becomes stronger, errors of commission grow to a peak, but they disappear as discrimination takes effect. By the time it has seen 63 sample sentences, the system has mastered the present progressive construction. ,In spirit, AMBER is very similar to Reeker's model, though they differ in many details. Historically, PST had no impact on the development of AMBER. The initial plans for AMBER arose from discussions with John R..Anderson in the fall of 1979, while I did not become aware of Reeker's work until the fall of 1980.2For the sake of clarity, I will be presenting only English paraphrases of the actual PRISM productions. All variables are italicized; these may match against any symbol, but all occurrences of a variable -" ~'. ~,~atch to the same element. | In conclusion, AMBER provides explanations for severat important phenomena observed in children's early speech. The system accounts for the one-word stage and the child's transition to the telegraphic stage. Although AMBER and children eventually learn to produce all relevant content words, both pass through a stage where some are omitted. Because it learns sets of conditions one at a time, the discrimination process explains the order in which grammatical morphemes are mastered. Finally, AMBER learns gradually enough to provide a plausible explanation of the incremental nature of first language acquisition. Thus the system constitutes a significant addition to our knowledge of syntactic development.Of course, AMBER has a number of limitations that should be addressed in future research. Successive versions should be able to learn the connections between words and concepts, should reduce the distinction between content words and morphemes, and should be able to master irregular constructions. Moreover, they should require less knowledge of the language learning task, and rely more of domainindependent learning mechanisms such as discrimination. But despite its limitations, the current version of AMBER has proven itself quite useful in clarifying the incremental nature of language acquisition, and future models promise to further our understanding of this complex process. 
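The discrimination step itself can be pictured as a comparison of two application contexts: variants are proposed that each add one condition present in the correct application but absent from the faulty one, and every variant starts out weak. The sketch below flattens AMBER's structured working memory into plain feature sets purely for illustration; the names are mine, not the system's.

def discriminate(rule, good_context, bad_context):
    # one variant per condition that held in the correct application but not the faulty one;
    # each variant is created with a low initial strength
    return [{"conditions": rule["conditions"] | {d},
             "action": rule["action"],
             "strength": 0.1}
            for d in good_context - bad_context]

is_1 = {"conditions": {"describing the action"}, "action": "say IS", "strength": 0.8}
good = {"describing the action", "action in the present", "agent is singular"}
bad  = {"describing the action", "action in the past",    "agent is singular"}
for variant in discriminate(is_1, good, bad):
    print(sorted(variant["conditions"]), "->", variant["action"])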
| Main paper:
introduction:
In this paper, I present a model that attempts to explain the regularities in children's early syntactic development. The model is called AMBER, an acronym for Acquisition Model Based on Error Recovery. As its name implies, AMBER learns language by comparing its own utterances to those of adults and attempting to correct any errors. The model is implemented as an adaptive production system -a formalism well-suited to modeling the incremental nature of human learning. AMEER focuses on issues such as the omission of content words, the occurrence of telegraphic speech, and the order in which function words are mastered. Before considering AMBER in detail, I will first review some major features of child language, and discuss some earlier models of these phenomena.Children do not learn language in an all.or.none fashion. They begin their linguistic careers uttering one word at a time, and slowly evolve through a number of stages, each containing more adult-like speech than the one before. Around the age of one year, the child begins to produce words in isolation, and continues this strategy for some months. At approximately 18 months, the child begins to combine words into meaningful sequences. In order-based languages such as English, the child usually follows the adult order. Initially only pairs of words are produced, but these are followed by three-word and later by four-word utterances. The simple sentences occurring in this stage consist almost entirely of content words, while grammatical morphemes such as tense endings and prepositions are largely absent.During the period from about 24 to 40 months, the child masters the grammatical morphemes which were absent during the previous stage. These "function words" are learned gradually; the time between the initial production of a morpheme and its mastery may be as long as 16 months. Brown (1973) has examined the order in which 14 English morphemes are acquired, finding the order of acquisition to be remarkably consistent across children. In addition, those morphemes with simpler meanings and involved in fewer transformations are learned earlier than more complex ones. These findings place some strong constraints on the learning mechanisms one postulates for morpheme acquisition. Now that we have reviewed some of the major aspects of child language, let us consider the earlier attempts at modeling these phenomena. Computer programs that learn language can be usefully divided into two groups: those which take advantage of semantic feedback, and those which do not. In general, the early work concerned itself with learning grammars in the absence of information about the meaning of sentences. Examples of this approach can be found in Solomonoff (1959) , Feldman (1969) and Homing (1969) . Since children almost certainly have semantic information available to them, I will not focus on their research here. However, much of the early work is interesting in its own right, and some excellent systems along these lines have recently been produced by Berwick (1980) and Wolff (1980) .In the late 1960's, some researchers began to incorporate semantic information into their language learning systems. The majority of the resulting programs showed little concern with the observed phenomena, including Siklossy's ZBIE (1972) , Ktein's AUTOLING (1973), Hedrick's production system model (1976), Anderson's LAS (1977) , and Sembugamoorthy's PLAS (1979) . These systems failed as models of human language acquisition in two major areas. 
First, they learned language in an all-or.none manner, and much too rapidly to provide useful models of child language. Second, these systems employed conservative learning strategies in the hope of avoiding errors. In contrast, children themselves make many errors in their early constructions, but eventually recover from them.However, a few researchers have attempted to construct plausible models of the child's learning process. For example, Kelley (1967) has described an "hypothesis testing" model that learned successively more complex phrase structure grammars for parsing simple sentences. As new syntactic classes became available, the program rejected its current grammar in favor of a more accurate one. Thus, the model moved from a stage in which individual words were viewed as "things" to the more sophisticated view that "subjects" precede "actions". One drawback of the model was that it could not learn new categories on its own initiative; instead, the author was forced to introduce them manually. Reeker (1976) has described PST, another theory of early syntactic development. This model assumed that children have limited short term memories, so that they store onty portions of an adult sample sentence. The model compared this reduced sentence to an internally generated utterance, and differences between the two were noted. Six types of differences were recognized (missing prefixes, missing suffixes, missing infixes, substitutions, extra words, and transpositions), and each led to an associated alteration of the grammar. PST accounted for children's omission of content words and the gradual increase in utterance length. The limited memory hypothesis also explained the telegraphic nature of early speech, though Reeker did not address the issue of function word acquisition. Overgeneralizations did occur in PST, but the model could revise its grammar upon their discovery, so as to avoid similar errors in the future. PST also helped account for the incremental nature of language acquisition, since differences were addressed one at a time and the grammar changed only slowly. Selfridge (1981) has described CHILD, another program that attempted to explain some of the basic phenomena of first language acquisition. This system began by learning the meanings of words in terms of a conceptual dependency representation. Word meanings were initially overly specific, but were generalized as more examples were encountered. As more words were learned and their definitions became less restrictive, the length of CHILD'S utterances increased. CHILD differed from other models of language learning by incorporating, a nonlinguistic component. This enabled the system to correctly respond to adult sentences such as Put the ba/I in the box, and led to the appearance that the system understood language before it could produce it. Of course, this strategy sometimes led to errors in comprehension. Coupled with the disapproval of a tutor, such errors were one of the major spurs to the learning of word orders. Syntactic knowledge was stored with the meanings of words, so that the acquisition of syntax necessarily occurred after the acquisition of individual words.Although tl~ese systems fare much better as psychological models than other language learning programs, they have some important limitations. We have seen that Kelley's system required syntactic classes to be introduced by hand, making his explanation less than satisfactory. 
Selfridge's CHILD was much more robust than Kelley's program, and was unique in modeling children's use of nonlinguistic cues for understanding. However, CHILD'S explanation for the omission of content words -that those words are not yet known -was implausible, since children often omit words that they have used in previous utterances. Reeker's PST explained this phenomenon through a limited memory hypothesis, which is consistent with our knowledge of children's memory skills. Still, PST included no model of the process through which memory improved; in order to simulate the acquisition of longer constructions, Reeker would have had to increase the system's memory size by hand. Both CHILD and PST learned relatively slowly, and made mistakes of the general type observed with children. Both systems addressed the issue of error recovery, starting off as abominable language users, but getting progressively better with time. This is a promising approach that I' attempt to develop it in its extreme form in the following pages.
an overview of amber:
Although Reeker's PST and Selfridge's CHILD address the transition from one-word to multi-word utterances, we have seen that problems exist with both accounts. Neither of these programs focus on the acquisition of function words, their explanations of content word omissions leave something to be desired, and though they learn more slowly than other systems, they still learn more rapidly than children. In response to these limitations, the goals of the current research are:• Account for the omission of content" words, and the eventual recovery from such omissions. • Account for the omission of function words, and the order in which these morphemes are mastered.• Account for the gradual nature of both these linguistic developments. In this section I provide an overview of AMBER, a model that provides one set of answers to these questions. Since more is known about children's utterances than their ability to understand the utterances of others, AMBER models the learning of generation strategies, rather than strategies for understanding language.Selfridge's and Reeker's models differ from other language learning systems in their concern with the problem of recovering from errors. The current research extends this idea even further, since all of AMBER'S learning strategies operate through a process of error recovery. 1 The model is presented with three pieces of information: a legal sentence, an event to be described, and a main goal or topic of the sentence. An event is represented as a semantic network, using relations like agent, action, object, size, color, and type. The specification of one of the nodes as the main topic allows the system to restate the network as a tree structure, and it is from this tree that AMBER generates a sentence. If this sentence is identical to the sample sentence, no learning is required. If a disagreement between the two sentences is found, AMBER modifies its set of rules in an attempt to avoid similar errors in the future, and the system moves on to the next example.AMBER'S performance system is stated as a set of conditionaction rules or productions that operate upon the goal tree to produce utterances. Although the model starts with the potential for producing (unordered) telegraphic sentences, it can initially generate only one word at a time. To see why this occurs, we must consider the three productions that make up AMBER'S initial performance system. The first rule (the start rul~) is responsible for establishing subgoals; it may be paraphrased as:If you want to describe node1, and node2 is in relation to node1, then describe node2.Matching first against the main goal node, this rule selects one of the nodes below it in the tree and creates a subgoal to describe that node. This rule continues to establish lower level goals until a terminal node is reached. At this point, a second production (the speak rule) is matched; this rule may be stated:If you want to describe a conceptt and word is the word for concept, then say word and note that concept has been described.This production retrieves the word for the concept AMBER wants to describe, actually says this word, and marks the terminal goal as satisfied. Once this has been done, the third and final performance production becomes true. 
This rule matches whenever a subgoal has been satisfied, and attempts to mark the supergoal as satisfied; it may be paraphrased as: If you want to describe node1, and node2 is in relation to node1, and node2 has already been described, then note that node1 has been described. Since the stop rule is stronger (see footnote 3) than the start rule (which would like to create another subgoal), it moves back up the tree, marking each of the active goals as satisfied (including the main goal). As a result, AMBER believes it has successfully described an event after it has uttered only a single word. Thus, although the model starts with the potential for producing multi-word utterances, it must learn additional rules (and make them stronger than the stop rule) before it can generate multiple content words in the correct order. In general, AMBER learns by comparing adult sentences to the sentences it would produce in the same situations. These predictions reveal two types of mistakes - errors of omission and errors of commission. These errors are detected by additional learning productions that are responsible for creating new performance rules. Thus, AMBER is an example of what Waterman (1975) has called an adaptive production system, which modifies its own behavior by inserting new condition-action rules. Below I discuss AMBER's response to errors of omission, since these are the first to occur and thus lead to the system's first steps beyond the one-word stage. I consider the omission of content words first, and then the omission of grammatical morphemes. Finally, I discuss the importance of errors of commission in discovering conditions on the production of morphemes.
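To make the one-word behaviour of these three initial rules concrete, here is a small Python sketch, assuming a goal tree encoded as nested dictionaries and a toy lexicon; neither the encoding nor the function names come from PRISM. Because the stop rule outweighs the start rule, the traversal marks the whole tree satisfied after a single content word has been uttered.

```python
LEXICON = {"*daddy": "Daddy", "*bounce": "bounce", "*ball": "ball"}   # assumed toy lexicon

def describe(goal, said, strengths):
    """One-word behaviour of the initial start/speak/stop rules (sketch)."""
    if isinstance(goal, str):                  # speak rule: say the word for a concept
        said.append(LEXICON[goal])
        return True                            # the terminal goal is now satisfied
    for relation, subgoal in goal.items():     # start rule: set up a subgoal
        describe(subgoal, said, strengths)
        if strengths["stop"] > strengths["start"]:
            return True                        # stop rule wins: mark this goal satisfied
    return True

event = {"agent": "*daddy", "action": "*bounce", "object": "*ball"}
said = []
describe(event, said, {"start": 0.3, "stop": 0.6})
print(said)                                    # -> ['Daddy'] : a single content word
```

The interesting point is that multi-word speech is not blocked by the representation; it is blocked only by the relative strengths, which is exactly what the later learning rules change.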
learning preferences and orders:
AMBER's initial self-modifications result from the failure to predict content words. Given its initial ability to say one word at a time, the system can make two types of content word omissions - it can fail to predict a word before a correctly predicted one, or it can omit a word after a correctly predicted one. Rather different rules are created in each case. For example, imagine that Daddy is bouncing a ball, and suppose that AMBER predicted only the word "ball", while hearing the sentence "Daddy is bounce ing the ball". In this case, one of the system's learning rules would note the omitted content word "Daddy" before the content word "ball", and an agent production would be created: AGENT If you want to describe event1, and agent1 is the agent of event1, then describe agent1. (Footnote 3: The notion of strength plays an important role in AMBER's explanation of language learning. When a new rule is created, it is given a low initial strength, but this is increased whenever that rule is relearned. And since stronger productions are preferred to their weaker competitors, rules that have been learned many times determine behavior.) Although I do not have the space to describe the responsible learning rule in detail, I can say that it matches against situations in which one content word is omitted before another, and that it always constructs new productions with the same form as the agent rule described above. In this case, it would also create a similar rule for describing actions, based on the omitted "bounce". Note that these new productions do not give AMBER the ability to say more than one word at a time. They merely increase the likelihood that the program will describe the agent or action of an event instead of the object. However, as AMBER begins to prefer agents to actions and actions to objects, the probability of the second type of error (omitting a word after a correctly predicted one) increases. For example, suppose that Daddy is again bouncing a ball, and the system says "Daddy" while it hears "Daddy is bounce ing the ball". In this case, a slightly different production is created that is responsible for ordering the creation of goals. Since the agent relation was described but the object was omitted, an agent-object rule is constructed: If you want to describe event1, and agent1 is the agent of event1, and you have described agent1, and object1 is the object of event1, then describe object1. Together with the agent rule shown above, this production lets AMBER produce utterances such as "Daddy ball". Thus, the model provides a simple explanation of why children omit some content words in their early multi-word utterances. Such rules must be constructed many times before they become strong enough to have an effect, but eventually they let the system produce telegraphic sentences containing all relevant content words in the standard order and lacking only grammatical morphemes.
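The two omission checks just described can be illustrated in a few lines of Python. This is a toy comparison routine, not AMBER's actual learning productions; the content-word filter and the returned labels are assumptions made for the example.

```python
def note_omission(adult, predicted, morphemes=frozenset({"is", "ing", "the"})):
    """Toy check for the two kinds of content-word omission discussed above."""
    content = [w for w in adult if w not in morphemes]
    hits = [w for w in content if w in predicted]          # correctly predicted words
    for i, w in enumerate(content):
        if w in predicted:
            continue
        if any(h in content[i + 1:] for h in hits):        # omitted BEFORE a predicted word
            return ("preference rule", w)                  # e.g. the AGENT production
        if any(h in content[:i] for h in hits):            # omitted AFTER a predicted word
            return ("ordering rule", w)                    # e.g. the agent-object production
    return None

heard = "Daddy is bounce ing the ball".split()
print(note_omission(heard, ["ball"]))    # -> ('preference rule', 'Daddy')
print(note_omission(heard, ["Daddy"]))   # -> ('ordering rule', 'bounce')
```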
learning suffixes and prefixes:
Once AMBER begins to correctly predict content words, it can learn rules for saying grammatical morphemes as well. As with content words, such rules are created when the system hears a morpheme but fails to predict it in that position. For example, suppose the program hears the sentence "Daddy * is bounce ing * the ball" (see footnote 4), but predicts only "Daddy bounce ball". In this case, the following rule is generated: ING-1 If you have described action1, and action1 is the action of event1, then say ING. Once it has gained sufficient strength, this rule will say the morpheme "ing" after any action word. As stated, the production is overly general and will lead to errors of commission. I consider AMBER's response to such errors in the following section. (Footnote 4: Asterisks represent pauses in the adult sentence. These cues are necessary for AMBER to decide that a morpheme like "is" is a prefix for "bounce" instead of a suffix for "Daddy".) The omission of prefixes leads to very similar rules. In the above example, the morpheme "is" was omitted before "bounce", leading to the creation of a prefix rule for producing the missing function word: IS-1 If you want to describe action1, and action1 is the action of event1, then say IS. Note that this rule will become true before an action has been described, while the rule ING-1 can apply only after the goal to describe the action has been satisfied. AMBER uses such conditions to control the order in which morphemes are produced. Figure 1 shows AMBER's mean length of utterance as a function of the number of sample sentences (taken in groups of five) seen by the program (see footnote 5). As one would expect, the system starts with an average of around one word per utterance, and the length slowly increases with time. AMBER moves through a two-word and then a three-word stage, until it eventually produces sentences lacking only grammatical morphemes. Finally, the morphemes are included, and adult-like sentences are produced. The incremental nature of the learning curve results from the piecemeal way in which AMBER learns rules for producing sentences, and from the system's reliance on the strengthening process.
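A rough sketch of this suffix/prefix rule creation is given below in Python, under the simplifying assumption that no morphemes were predicted at all. The pause markers, the relation map, and the rule tuples are illustrative assumptions; the point is only that an unpredicted morpheme is attached, as a prefix or a suffix, to the adjacent content word that is not separated from it by a pause.

```python
MORPHEMES = {"is", "ing", "the", "s", "ed", "a", "some"}

def propose_morpheme_rules(heard, relations):
    """Toy sketch of prefix/suffix rule creation for unpredicted morphemes.

    '*' marks a pause in the adult sentence; `relations` maps a content word
    to its relation in the event tree, e.g. {"bounce": "action"}.  A morpheme
    attaches to the adjacent content word that is not cut off by a pause."""
    rules = []
    for i, w in enumerate(heard):
        if w not in MORPHEMES:
            continue
        left = heard[i - 1] if i > 0 else "*"
        right = heard[i + 1] if i + 1 < len(heard) else "*"
        if left in relations:                   # attaches leftward: a suffix rule
            rules.append(("say " + w.upper(), "after", relations[left]))
        elif right in relations:                # otherwise rightward: a prefix rule
            rules.append(("say " + w.upper(), "before", relations[right]))
    return rules

heard = "Daddy * is bounce ing * the ball".split()
relations = {"Daddy": "agent", "bounce": "action", "ball": "object"}
print(propose_morpheme_rules(heard, relations))
# [('say IS', 'before', 'action'), ('say ING', 'after', 'action'),
#  ('say THE', 'before', 'object')]
```

Without the pause cue, the same routine would happily attach "is" as a suffix of "Daddy", which is exactly the segmentation ambiguity footnote 4 mentions.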
recovering from errors of commission:
Errors of commission occur when AMBER predicts a morpheme that does not occur in the adult sentence. These errors result from the overly general prefix and suffix rules that we saw in the last section. In response to such errors, AMBER calls on a discrimination routine in an attempt to generate more conservative productions with additional conditions (see footnote 6). Earlier, I considered a rule (is-1) for producing "is" before the action of an event. As stated, this rule would apply in inappropriate situations as well as correct ones. For example, suppose that AMBER learned this rule in the context of the sentence "Daddy is bounce ing the ball". Now suppose the system later uses this rule to predict the same sentence, but that it instead hears the sentence "Daddy was bounce ing the ball". (Footnote 5: AMBER is implemented on a PDP KL-10 in PRISM (Langley and Neches, 1981), an adaptive production system language designed for modeling learning phenomena; the run summarized in Figure 1 took approximately 2 hours of CPU time.) At this point, AMBER's discrimination routine would retrieve the rule responsible for predicting "is" and lower its strength; it would also retrieve the situation that led to the faulty application, passing this information to the discrimination routine. Comparing the earlier good case to the current bad case, the discrimination mechanism finds only one difference - in the good example, the action node was marked present, while no such marker occurred during the faulty application. The result is a new production that is identical to the original rule, except that an additional condition has been included: If you want to describe action1, and action1 is the action of event1, and action1 is in the present, then say IS. This new condition will let the variant rule fire only when the action is marked as occurring in the present. When first created, the is-2 production is too weak to be seriously considered. However, as it is learned again and again, it will eventually come to mask its predecessor. This transition is aided by the weakening of the faulty is-1 rule each time it leads to an error. Once the variant production has gained enough strength to apply, it will produce its own errors of commission. For example, suppose AMBER uses the is-2 rule to predict "The boy s is bounce ing the ball", while the system hears "The boy s are bounce ing the ball". This time the difference is more complicated. The fact that the action had an agent in the good situation is no help, since an agent was present during the faulty firing as well. However, the agent was singular in the first case but not during the second. Accordingly, the discrimination mechanism creates a second variant: If you want to describe action1, and action1 is the action of event1, and action1 is in the present, and agent1 is the agent of event1, and agent1 is singular, then say IS. The resulting rule contains two additional conditions, since the learning process was forced to chain through two elements to find a difference. Together, these conditions keep the production from saying the morpheme "is" unless the agent of the current action is singular in number. Note that since the discrimination process must learn these sets of conditions separately, an important prediction results: the more complex the conditions on a morpheme's use, the longer it will take to master. For example, three sets of conditions are required for the "is" rule, while only a single condition is needed for the "ing" production.
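The core of that discrimination step can be sketched in a few lines of Python. The feature encoding and the names below are illustrative assumptions; in particular, AMBER reaches conditions such as the agent's number by chaining through relations, which this toy comparison does not model.

```python
def discriminate(rule_conditions, good_case, bad_case):
    """Toy discrimination: propose variants, each adding one condition that
    held when the rule fired correctly but not when it fired in error."""
    return [frozenset(rule_conditions | {d}) for d in sorted(good_case - bad_case)]

is_1 = {"want to describe action1", "action1 is the action of event1"}
good = {"action1 is the action of event1", "action1 in the present"}   # "is" was correct
bad  = {"action1 is the action of event1"}                             # adult said "was"
for variant in discriminate(is_1, good, bad):
    print(sorted(variant))
# -> a single variant that adds "action1 in the present", i.e. the is-2 rule above
```

When several features differ between the good and bad cases, this sketch produces one weak variant per feature, which is how the breadth-first search over variants described later in the paper arises.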
As a result, the former is mastered after the latter, just as found in children's speech. Table 1 presents the order of acquisition for the six classes of morpheme learned by AMBER, and the order in which the same morphemes were mastered by Brown's children. The number of sample sentences the model required before mastery are also included. (Footnote 6: Anderson's ALAS (1981) system uses a very similar process to recover from overly general morpheme rules. AMBER and ALAS have much in common, both having grown out of discussions between Anderson and the author. Although there is considerable overlap, ALAS generally accounts for later developments in children's speech than does AMBER.) The general trend is very similar for the children and the model, but two pairs of morphemes are switched. For AMBER, the plural construction was mastered before "ing", while in the observed data the reverse was true. However, note that AMBER mastered the progressive construction almost immediately after the plural, so this difference does not seem especially significant. Second, the model mastered the articles "the", "a", and "some" before the construction for past tense. However, Brown has argued that the notions of "definite" and "indefinite" may be more complex than they appear on the surface; thus, AMBER's representation of these concepts as single features may have oversimplified matters, making articles easier to learn than they are for the child. Thus, the discrimination process provides an elegant explanation for the observed correlation between a morpheme's complexity and its order of acquisition. Observe that if the conditions on a morpheme's application were learned through a process of generalization such as that proposed by Winston (1970), exactly the opposite prediction would result. Since generalization operates by removing conditions which differ in successive examples, simpler rules would be finalized later than more complex ones. Langley (1982) has discussed the differences between generalization-based and discrimination-based approaches to learning in more detail. (Table 1. Order of morpheme mastery by the child and AMBER.) Some readers will have noted the careful crafting of the above examples, so that only one difference occurred in each case. This meant that the relevant conditions were obvious, and the discrimination mechanism was not forced to consider alternate corrections. In order to more closely model the environment in which children learn language, AMBER was presented with randomly generated sentence/meaning pairs.
Eventually, this search process leads to the correct rule, even in the presence of many irrelevant features. Figure 2 presents the learning curves for the "ing" morpheme. Since AMBER initially lacks an "ing" rule, errors of omission abound at the outset, but as this production and its variants are strengthened, such errors decrease. In contrast, errors of commission are absent at the beginning, since AMBER lacks an "ing" rule to make false predictions. As the morpheme rule becomes stronger, errors of commission grow to a peak, but they disappear as discrimination takes effect. By the time it has seen 63 sample sentences, the system has mastered the present progressive construction.
directions for future research:
In the preceding pages, we have seen that AMBER offers explanations for a number of phenomena observed in children's early speech. These include the omission of content words and morphemes, the gradual manner in which these omissions are overcome, and the order in which grammatical morphemes are mastered. As a psychological model of early syntactic development, AMBER constitutes an improvement over previous language learning programs. However, this does not mean that the model cannot be improved, and in this section I outline some directions for future research efforts. One of the criteria by which any scientific theory can be judged is simplicity, and this is one dimension along which AMBER could stand some improvement. In particular, some of AMBER's learning heuristics for coping with errors of omission incorporate considerable knowledge about the task of learning a language. For example, AMBER knows the form of the rules it will learn for ordering goals and producing morphemes. Another questionable piece of information is the distinction between major and minor meanings that lets AMBER treat content words and morphemes as completely separate entities. One might argue that the child is born with such knowledge, so that any model of language acquisition should include it as well. However, until such innateness is proven, any model that can manage without such information must be considered simpler, more elegant, and more desirable than a model that requires it to learn a language. In contrast to these domain-specific heuristics, AMBER's strategy for dealing with errors of commission incorporates an apparently domain-independent learning mechanism - the discrimination process. This heuristic can be applied to any domain in which overly general rules lead to errors, and can be used on a variety of representations to discover the conditions under which such rules should be selected. In addition to language development, the discrimination process has been applied to concept learning (Anderson, Kline, and Beasely, 1979; Langley, 1982) and strategy acquisition (Brazdil, 1978; Langley, 1982). Langley (1982) has discussed the generality and power of discrimination-based approaches to learning in greater detail. As we shall see below, this heuristic may provide a more plausible explanation for the learning of word order. Moreover, it opens the way for dealing with some aspects of language acquisition that AMBER has so far ignored - the learning of word/concept links and the mastering of irregular constructions. AMBER learns the order of content words through a two-stage process, first learning to prefer some relations (like agent) over others (like action or object), and then learning the relative orders in which such relations should be described. The adaptive productions responsible for these transitions contain the actual form of the rules that are learned; the particular rules that result are simply instantiations of these general forms. Ideally, future versions of AMBER should draw on more general learning strategies to acquire ordering rules. Let us consider how the discrimination mechanism might be applied to the discovery of such rules. In the existing system, the generation of "ball" without a preceding "Daddy" is viewed as an error of omission. However, it could as easily be viewed as an error of commission in which the goal to describe the object was prematurely satisfied.
In this case, one might use discrimination to generate a variant version of the start rule: If you want to describe node1, and node2 is the object of node1, and node3 is the agent of node1, and you have described node3, then describe node2. This production is similar to the start rule, except that it will set up goals only to describe the object of an event, and then only if the agent has already been described. In fact, this rule is identical to the agent-object rule discussed in an earlier section; the important point is that it is also a special case of the start rule that might be learned through discrimination when the more general rule fires inappropriately. The same process could lead to variants such as the agent rule, which express preferences rather than order information. Rather than starting with knowledge of the forms of rules at the outset, AMBER would be able to determine their form through a more general learning heuristic. The current version of AMBER relies heavily on the representational distinction between major meanings and modulations of those meanings. Unfortunately, some languages express through content words what others express through grammatical morphemes. Future versions of the system should lessen this distinction by using the same representation for both types of information. In addition, the model might employ a single production for learning to produce both content words and morphemes; thus, the program would lack the speak rule described earlier, but would construct specific versions of this production for particular words and morphemes. This would also remedy the existing model's inability to learn new connections between words and concepts. Although the resulting rules would probably be overly general, AMBER would be able to recover from the resulting errors by additional use of the discrimination mechanism. The present model also makes a distinction between morphemes that act as prefixes (such as "the") and those that act as suffixes (such as "ing"). Two separate learning rules are responsible for recovering from function word omissions, and although they are very similar, the conditions under which they apply and the resulting morpheme rules are different. Presumably, if a single adaptive production for learning words and morphemes were introduced, it would take over the functions of both the prefix and suffix rules. If this approach can be successfully implemented, then the current reliance on pause information can be abandoned as well, since the pauses serve only to distinguish suffixes from prefixes. Such a reorganization would considerably simplify the theory, but it would also lead to two complications. First, the resulting system would tend to produce utterances like "Daddy ed" or "the bounce", before it learned the correct conditions on morphemes through discrimination. (This problem is currently avoided by including information about the relation when a morpheme rule is first built, but this requires domain-specific knowledge about the language learning task.) Since children very seldom make such errors, some other mechanism must be found to explain their absence, or the model's ability to account for the observed phenomena will suffer. Second, if pause information (and the ability to take advantage of such information) is removed, the system will sometimes decide a prefix is a suffix and vice versa. For example, AMBER might construct a rule to say "ing" before the object of an event is described, rather than after the action has been mentioned.
However, such variants would have little effect on the system's overall performance, since they would be weakened if they ever led to deviant utterances, and they would tend to be learned less often than the desired rules in any case. Thus, the strengthening and weakening processes would tend to direct search through the space of rules toward the correct segmentation, even in the absence of pause information. Another of AMBER's limitations lies in its inability to learn irregular constructions such as "men" and "ate". However, by combining discrimination and the approach to learning word/concept links described above, future implementations should fare much better along this dimension. For example, consider the irregular noun "foot", which forms the plural "feet". Given a mechanism for connecting words and concepts, AMBER might initially form a rule connecting the concept *foot to the word "foot". After gaining sufficient strength, this rule would say "foot" whenever seeing an example of the concept *foot. Upon encountering an occurrence of "feet", the system would note the error of commission and call on discrimination. This would lead to a variant rule that produced "foot" only when a single marker was present. Also, a new rule connecting *foot to "feet" would be created. Eventually, this new rule would also lead to errors of commission, and a variant with a plural condition would come to replace it. Dealing with the rule for producing the plural marker "s" would be somewhat more difficult. Although AMBER might initially learn to say "foot" and "feet" under the correct circumstances, it would eventually learn the general rule for saying "s" after plural agents and objects. This would lead to constructions such as "feet s", which have been observed in children's utterances. The system would have no difficulty in detecting such errors of commission, but the appropriate response is not so clear. Conceivably, AMBER could create variants of the "s" rule which stated that the concept to be described must not be *foot. However, a similar condition would also have to be included for every situation in which irregular pluralization occurred (deer, man, cow, and so on). Similar difficulties arise with irregular constructions for the past tense. A better solution would have AMBER construct a special rule for each irregular word, which "imagined" that the inflection had already been said. Once these productions became stronger than the "s" and "ed" rules, they would prevent the latter's application and bypass the regular constructions in these cases. Overly general constructions like "foot s" constitute a related form of error. Although AMBER would generate such mistakes before the irregular form was mastered, it would not revert to the overgeneral regular construction at a later point, as do many children. The area of irregular constructions is clearly a phenomenon that deserves more attention in the future.
conclusions:
In conclusion, AMBER provides explanations for several important phenomena observed in children's early speech. The system accounts for the one-word stage and the child's transition to the telegraphic stage. Although AMBER and children eventually learn to produce all relevant content words, both pass through a stage where some are omitted. Because it learns sets of conditions one at a time, the discrimination process explains the order in which grammatical morphemes are mastered. Finally, AMBER learns gradually enough to provide a plausible explanation of the incremental nature of first language acquisition. Thus the system constitutes a significant addition to our knowledge of syntactic development. Of course, AMBER has a number of limitations that should be addressed in future research. Successive versions should be able to learn the connections between words and concepts, should reduce the distinction between content words and morphemes, and should be able to master irregular constructions. Moreover, they should require less knowledge of the language learning task, and rely more on domain-independent learning mechanisms such as discrimination. But despite its limitations, the current version of AMBER has proven itself quite useful in clarifying the incremental nature of language acquisition, and future models promise to further our understanding of this complex process.
Appendix: Footnote 1: In spirit, AMBER is very similar to Reeker's model, though they differ in many details. Historically, PST had no impact on the development of AMBER. The initial plans for AMBER arose from discussions with John R. Anderson in the fall of 1979, while I did not become aware of Reeker's work until the fall of 1980. Footnote 2: For the sake of clarity, I will be presenting only English paraphrases of the actual PRISM productions. All variables are italicized; these may match against any symbol, but all occurrences of a variable must match to the same element.
| null | null | null | null | {
"paperhash": [
"anderson|a_theory_of_language_acquisition_based_on_general_learning_principles",
"wolfp|language_acquisition_and_the_discovery_of_phrase_structure",
"berwick|computational_analogues_of_constraints_on_grammars:_a_model_of_syntactic_acquisition",
"sembugamoorthy|pus,_a_paradigmatic_language_acquisition_system:_an_overview",
"anderson|induction_of_augmented_transition_networks",
"waterman|adaptive_production_systems",
"brown|a_first_language:_the_early_stages",
"winston|learning_structural_descriptions_from_examples",
"feldman|grammatical_complexity_and_inference",
"selfridge|a_computer_model_of_child_language_acquisition",
"siklóssy|natural_language_learning_by_computer"
],
"title": [
"A Theory of Language Acquisition Based on General Learning Principles",
"Language Acquisition and the Discovery of Phrase Structure",
"Computational Analogues of Constraints on Grammars: A Model of Syntactic Acquisition",
"PUS, A Paradigmatic Language Acquisition System: An Overview",
"Induction of Augmented Transition Networks",
"Adaptive Production Systems",
"A First Language: The Early Stages",
"Learning structural descriptions from examples",
"Grammatical complexity and inference",
"A Computer Model of Child Language Acquisition",
"Natural language learning by computer"
],
"abstract": [
"A simulation model is described for the acquisition of the control of syntax in language generation. This model makes use of general learning principles and general principles of cognition. Language generation is modelled as a problem solving process involving principly the decomposition of a lobe-communicated semantic structure into a hierarchy of subunits for generation. The syntax of the language controls this decomposition. It is shown how a sentence and semantic structure can be compared to infer the decomposition that led to the sentence. The learning processes involve generalizing rules to classes of words, learning by discrimination the various contextual constraints on a rule application, and a strength process which monitors a rule's history of success and failure. This system is shown to apply to the learning of noun declensions in Latin, relative clause constructions in French, and verb auxiliary structures in English.",
"A computer program intended as a step towards an empirically adequate theory of first-language acquisition by children is presented. It has been tested on a sample of English transcribed as a sequence of word classes. The structures formed by the program correspond in many cases with recognized structures in English, and there is a significant correspondence between a parsing of the sample by the program and conventional surface-structure analysis. Anomalies in the program's performance are discussed.",
"A principal goal of modern linguistics is to account for the apparently rapid and uniform acquisition of syntactic knowledge, given the relatively impoverished input that evidently serves as the basis for the induction of that knowledge the so-called projection problem. At least since Chomsky, the usual response to the projection problem has been to characterize knowledge of language as a grammar, and then proceed by restricting so severely the class of grammars available for acquisition that the induction task is greatly simplified perhaps trivialized. consistent with our lcnowledge of what language is and of which stages the child passes through in learning it.\" [2, page 218] In particular, ahhough the final psycholinguistic evidence is not yet in, children do not appear to receive negative evidence as a basis for the induction of syntactic rules. That is, they do not receive direct reinforcement for what is no_..~t a syntactically well-formed sentence (see Brown and Hanlon [3] and Newport, Gleitman, and Gleitman [4] for discussion). Á If syntactic acquisition can proceed using just positive examples, then it would seem completely unnecessary to move to any enrichment of the input data that is as yet unsupported by psycholinguistic evidence. 2",
"This paper informally describes the capabilities of a teachable analogy-based language-independent natural language acquisition System known as PUS. Its language comprehension is intended to resemble that of pre-achool children in certain significant aspects. It can be taught, through examples, to understand and acquire the situational aspects (i.e., physical objects, agents, their senaori-motor properties and spatial relations) described in a text. It has no built-in knowledge of English or the domain of discourse. Neither it infers any formal grammar of English.",
"LAS is a program that acquires augmented transition network (ATN) grammars. It requires as data sentences of the language and semantic network representatives of their meaning. In acquiring the ATN grammars, it induces the word classes of the language, the rules of formation for sentences, and the rules mapping sentences onto meaning. The induced ATN grammar can be used both for sentence generation and sentence comprehension. Critical to the performance of the program are assumptions that it makes about the relation between sentence structure and surface structure (the graph deformation condition), about when word classes may be formed and when ATN networks may be merged, and about the structure of noun phrases. These assumptions seem to be good heuristics which are largely true for natural languages although they would not be true for many nonnatural languages. Provided these assumptions are satisfied LAS seems capable of learning any context-free language.",
"Adaptive production systems are defined and used to illustrate adaptive techniques in production system construction. A learning paradigm is described with in the framework of adaptive production systems, and is applied to a simple rote learning task, a nonsense syllable association and discrimination task, and a serial pattern acquisition task. It is shown that with the appropriate production building mechanism, all three tasks can be solved using similar adaptive production system learning techniques.",
"For many years, Roger Brown and his colleagues have studied the developing language of pre-school children--the language that ultimately will permit them to understand themselves and the world around them. This longitudinal research project records the conversational performances of three children, studying both semantic and grammatical aspects of their language development. These core findings are related to recent work in psychology and linguistics--and especially to studies of the acquisition of languages other than English, including Finnish, German, Korean, and Samoan. Roger Brown has written the most exhaustive and searching analysis yet undertaken of the early stages of grammatical constructions and the meanings they convey. The five stages of linguistic development Brown establishes are measured not by chronological age-since children vary greatly in the speed at which their speech develops--but by mean length of utterance. This volume treats the first two stages. Stage I is the threshold of syntax, when children begin to combine words to make sentences. These sentences, Brown shows, are always limited to the same small set of semantic relations: nomination, recurrence, disappearance, attribution, possession, agency, and a few others. Stage II is concerned with the modulations of basic structural meanings--modulations for number, time, aspect, specificity--through the gradual acquisition of grammatical morphemes such as inflections, prepositions, articles, and case markers. Fourteen morphemes are studied in depth and it is shown that the order of their acquisition is almost identical across children and is predicted by their relative semantic and grammaticalcomplexity. It is, ultimately, the intent of this work to focus on the nature and development of knowledge: knowledge concerning grammar and the meanings coded by grammar; knowledge inferred from performance, from sentences and the settings in which they are spoken, and from signs of comprehension or incomprehension of sentences.",
"Massachusetts Institute of Technology. Dept. of Electrical Engineering. Thesis. 1970. Ph.D.",
"The problem of inferring a grammar for a set of symbol strings is considered and a number of new decidability results obtained. Several notions of grammatical complexity and their properties are studied. The question of learning the least complex grammar for a set of strings is investigated leading to a variety of positive and negative results. This work is part of a continuing effort to study the problems of representation and generalization through the grammatical inference question. Appendices A and B and Section 2a.0 are primarily the work of Reder, Sections 2b and 3d of Horning, Section 4 and Appendix C of Gips, and the remainder the responsibility of Feldman.",
"because generation cannot occur until comprehension learning adds words to the dictionary. Length of utterance Inoreases because the number of words available to express a oonoept inoreases during comprension learning. Knowledge of language meaning precedes knowledge of language syntax beoause syntax is indexed under word meanings and hence cannot be learned before the word meaning. Misunderstanding of utterances whose correct interpretation is not the semantically most probable occurs beoause children use their knowledge or probable meanings to augment gape in their understanding. Finally, misunderstanding of utterances whose syntax suggests an interpretation different from) the semantically most likely interpretation occurs beoause knowledge of syntax is learned after knowledge of meaning. This error is made when enough knowledge of aeaning has been acquired to produce an interpretation, but not enough syntax has been learned to produoe a correot interpretation. There are certainly aany other factors in child language acquisition which have not been considered here, but this paper offers support for the seaantically-indexed syntax hypothesis and the comprehension-driven generation hypothesis as components of a complete model of child language acquisition. Further research must address questions posed by the data of more advanced child language acquisition, which could support other methods of indexing syntactic knowledge and other possible relationships between comprehension and generation. ACKNOWLEDGEMENTS Dr. Roger Sohank's assistance in this work was invaluable. Dr. Richard Cullingford has made many valuable suggestions, and Dr. Katharine Nelson has provided many insights into problems of modelling child language learning. Peter Selfridge, Larry Birnbaum, Marie Bienkowskl and Jamie Callan have contributed both ideas and criticisms.",
"Abstract : Learning a natural language is taken as an improvement in a system's ability to express situations in a natural language. This dissertation describes a computer program, called Zbie, written in IPL-V, which accepts the description of situations in a uniform, structured functional language and tries to express these situations in a natural language. Examples are given for German and, mostly, Russian. At run-time, Zbie builds simple memory structures. Patterns and sets are built on the functional language. The translation rules of the patterns and an in-context vocabulary provide the transition to the natural language. Zbie is a cautious learner, and avoids errors by several mechanisms. Zbie is capable of some evolutionary learning."
],
"authors": [
{
"name": [
"John R. Anderson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Gerard Wolfp"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Berwick"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"V. Sembugamoorthy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"John R. Anderson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. A. Waterman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Roger S. Brown"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Winston"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Feldman",
"J. Gips",
"J. Horning",
"S. Reder"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Selfridge"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"L. Siklóssy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"6937633",
"2239631",
"14859431",
"23037155",
"38457468",
"528793",
"145113436",
"106617047",
"117870015",
"31823224",
"58822924"
],
"intents": [
[],
[],
[
"methodology"
],
[],
[
"background",
"methodology"
],
[
"background",
"methodology"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"background"
]
],
"isInfluential": [
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false
]
} | - Problem: Understanding the regularities in children's early syntactic development.
- Solution: The hypothesis is that the AMBER model, based on error recovery, can account for the gradual improvement in language acquisition by children, including the omission of content words, the mastering of function words, and the incremental nature of linguistic developments. | 512 | 0.009766 | null | null | null | null | null | null | null | null |
e6473453ea8f6f62134892b520f3ec770c4bdf69 | 29329061 | null | Twenty Years of Reflections | As I was reflecting deeply in front of the statue of Bodhisattva of Gr~ and Wisdom in the University Muset~n, I was startled to see Jane. Having heard from Don that he had asked the old cats to reflect on the 20 years of ACL, Jane had decided that she should drop in on some of them to seek their advice concerning the future of ACL. | {
"name": [
"Joshi, Aravind K."
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 0 | 0 | null | '~4hat brings you here?" I asked with a grin. Jane thought for a while and said:"Would you tell me, please, which way I ought to take ACL in the future?" "That depends a good deal on where you think it should go ~" I replied."I don't much care where ,", said Jane. "Then it doesn't matter ~c~ way you take it," I said after prolonged reflection. " so long as I take it somewhere," Jane added as an explanation."Oh, you are sure to do that," said I, "if you only parse long enough." Jane felt that this could not be denied, so just to be friendly she decided to ask another question:'"What sort of computational linguists live about here?" "Well ~ in that direction lives Bonnie," I said waving my right paw and waving the other paw, "and in that direction lives Barbara. Visit either you like:they're both mad." "But I don't want to go among mad people," Jane remarked."Oh, you can't help that," said I, "we're all mad here. I ' m mad. You' re mad." "How do you know I'm mad?" said Jane. '~fou must be," said I, "or you wouldn't have come here." Jane didn't think that proved it at all. However, she went on: "And how do you know that you're mad?" "Well, to begin with," said I, "Don is not mad.You grant that?" "I suppose so," said Jane. '"~ell, then," I went on, "Don is not mad and I am not Don.Therefore, I am mad." Jane didn't appear to be satisfied with this bit of catatonic logic (quite distinct from the monotonic logic)."I must go for a walk now and continue reflecting," I said, as I left her, leaving my grin behind. | null | null | null | null | Main paper:
:
'~4hat brings you here?" I asked with a grin. Jane thought for a while and said:"Would you tell me, please, which way I ought to take ACL in the future?" "That depends a good deal on where you think it should go ~" I replied."I don't much care where ,", said Jane. "Then it doesn't matter ~c~ way you take it," I said after prolonged reflection. " so long as I take it somewhere," Jane added as an explanation."Oh, you are sure to do that," said I, "if you only parse long enough." Jane felt that this could not be denied, so just to be friendly she decided to ask another question:'"What sort of computational linguists live about here?" "Well ~ in that direction lives Bonnie," I said waving my right paw and waving the other paw, "and in that direction lives Barbara. Visit either you like:they're both mad." "But I don't want to go among mad people," Jane remarked."Oh, you can't help that," said I, "we're all mad here. I ' m mad. You' re mad." "How do you know I'm mad?" said Jane. '~fou must be," said I, "or you wouldn't have come here." Jane didn't think that proved it at all. However, she went on: "And how do you know that you're mad?" "Well, to begin with," said I, "Don is not mad.You grant that?" "I suppose so," said Jane. '"~ell, then," I went on, "Don is not mad and I am not Don.Therefore, I am mad." Jane didn't appear to be satisfied with this bit of catatonic logic (quite distinct from the monotonic logic)."I must go for a walk now and continue reflecting," I said, as I left her, leaving my grin behind.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 512 | 0 | null | null | null | null | null | null | null | null |
b001b228f0b1cd8a1c259c7e487a1b7721ccd21e | 13343095 | null | Experience with an Easily Computed Metric for Ranking Alternative Parses | This brief paper, which is itself an extended abstract for a forthcoming paper, describes a metric that can be easily computed during either bottom-up or top-down construction of a parse tree for ranking the desirability of alternative parses. In its simplest form, the metric tends to prefer trees in which constituents are pushed as far down as possible, but by appropriate modification of a constant in the formula other behavior can be obtained also. This paper includes an introduction to the EPISTLE system being developed at IBM Research and a discussion of the results of using this metric with that system. | {
"name": [
"Heidorn, George E."
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 9 | 32 | null | described a technique for computing a number for each node during the bottom-up construction of a parse tree, such that a node with a smaller number is to be preferred to a node with a larger number covering the same portion of text. At the time, this scheme was used primarily to select among competing noun phrases in queries to a program explanation system. Although it appeared to work well, it was not extensively tested. Recently, as part of our research on the EPISTLE system, this idea has been modified and extended to work over entire sentences and to provide for top-down computation. Also, we have done an analysis of 80 sentences with multiple parses from our data base to evaluate the performance of this metric, and have found that it is producing very good results. This brief paper, which is actually an extended abstract for a forthcoming paper, begins with an introduction to the EPISTLE system, to set the stage for the current application of this metric. Then the metrie's computation is described, followed by a discussion of the results of the 80-sentence analysis. Finally, some comparisons are made to related work by others.In its current form, the EPISTLE system (Miller, Heidorn and Jensen 1981) is intended to do critiquing of a writer's use of English in business correspondence, and can do some amount of grammar and style checking. The central component of the system is a parser for assigning grammatical structures to input sentences. This is done with NLP, a LISP-based natural language processing system which uses augmented phrase structure grammar ~APSG) rules (Heidorn 1975) to specify how text is to be converted into a network of nodes consisting of attribute-value pairs and how such a network can be converted into text. The first process, decoding, is done in a bottom-up, parallel processing fashion, and the inverse process, encoding, is done in a top-down, serial manner. In the current application the network which is constructed is simply a decorated parse tree, rather than a meaning representation.Because EPISTLE must deal with unrestricted input (both in terms of vocabulary and syntactic constructions), we are trying to see how far we can get initially with almost no semantic information.In particular, our information about words is pretty much limited to parts-of-speech that come from an on-line version of a standard dictionary of over 100,000 entries, and the conditions in our 250 decoding rules are based primarily on syntactic cues. We strive for what we call a unique approximate parse for each sentence, a parse that is not necessarily semantically accurate (e.g., prepositional phrase attachments are not always done right) but one which is adequate for the text critiquing tasks, nevertheless.One of the things we do periodically to test the performanee of our parsing component is to run it on a set of 400 actual business letters, consisting of almost 2,300 sentences which range in length up to 63 words, averaging 19 words per sentence. In two recent runs of this data base, the following results were obtained:No. of parses June 1981 Dec. 1981 0 57% 36% 1 31% 41% 2 6% 11% >2 6% 12%The improvement in performance from June to December can be attributed both to writing additional grammar rules and to relaxing overly restrictive conditions in other rules. 
It can be seen that this not only had the desirable effect of reducing the percentage of no-parse sentences (from 57% to 36%) and increasing the percentage of single-parse sentences (from 31% to 41%), but it also had the undesirable side effect of increasing the multiple-parse sentences (from 12% to 23%). Because we expect this situation to continue as we further increase our grammatical coverage, the need for a method of ranking multiple parses in order to select the best one on which to base our grammar and style critiques is acutely felt. The metric can be stated by the following recursive formula: Score_phrase = Σ_Mods K_Mod (Score_Mod + 1), where the lowest score is considered to be the best. This formula says that the score associated with a phrase is equal to the sum of the scores of the modifying phrases of that phrase adjusted in a particular way, namely that the score of each modifier is increased by 1 and then multiplied by a constant K appropriate for that type of modifier. A phrase with no modifiers, such as an individual word, has a score of 0. This metric is based on a flat view of syntactic structure which says that each phrase consists of a head word and zero or more pre- and post-modifying phrases. (In this view a sentence is just a big verb phrase, with modifiers such as subject, objects, adverbs, and subordinate clauses.) In its simplest form this metric can be considered to be nothing more than the numerical realization of Kimball's Principle Number Two (Kimball 1972): "Terminal symbols optimally associate to the lowest nonterminal node." (Although Kimball calls this principle right association and illustrates it with right-branching examples, it can often apply equally well to left-branching structures.) One way to achieve this simplest form is to use a K of 0.1 for all types of modifiers. An example of the application of the metric in this simplest form is given in Figure 1. Two parse trees are shown for the sentence, "See the man with the telescope," with a score attached to each node (other than those that are zero). A node marked with an asterisk is the head of its respective phrase. In this form of flat parse tree a prepositional phrase is displayed as a noun phrase with the preposition as an additional premodifier. As an example of the calculation, the score of the PP here is computed as 0.1(0+1) + 0.1(0+1), because the scores of its modifiers (the ADJ and the PREP) are each 0. Similarly, the score of the NP in the second parse tree is computed as 0.1(0+1) + 0.1(0.2+1), where the 0.2 within it is the score of the PP. It can be seen from the example that in this simplest form the individual digits of the score after the decimal point tell how many modifiers appear at each level in the phrase (as long as there are no more than nine modifiers at any level). The farther down in the parse tree a constituent is pushed, the farther to the right in the final score its contribution will appear. Hence, a deeper structure will tend to have a smaller score than a shallower structure, and, therefore, be preferred. In the example, this is the second tree, with a score of 0.122 vs. 0.23. That is not to say that this would be the semantically correct tree for this sentence in all contexts, but only that if a choice cannot be made on any other grounds, this tree is to be preferred. Applying the metric in its simplest form does not produce the desired result for all grammatical constructions, so that values for K other than 0.1 must be used for some types of modifiers.
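The recursive metric and the telescope example lend themselves to a compact sketch. The Python fragment below is an illustrative implementation of the formula as stated above, with a uniform K of 0.1; the tuple encoding of phrases is an assumption made for the example and is not the NLP/EPISTLE representation. It reproduces the 0.23 versus 0.122 comparison, preferring the tree in which the prepositional phrase is attached lower.

```python
def score(phrase, k=lambda mod_type: 0.1):
    """Parse-ranking metric sketch: sum over modifiers of K * (score(mod) + 1).

    A phrase is (head, [(mod_type, subphrase), ...]); a bare word has no
    modifiers and scores 0.  Lower scores are preferred."""
    head, mods = phrase
    return sum(k(t) * (score(m, k) + 1) for t, m in mods)

word = lambda w: (w, [])

# "See the man with the telescope" with the PP attached to the verb phrase:
np_man = ("man", [("adj", word("the"))])
pp     = ("telescope", [("prep", word("with")), ("adj", word("the"))])
flat   = ("See", [("object", np_man), ("pp", pp)])

# The same sentence with the PP attached inside the object noun phrase:
np_deep = ("man", [("adj", word("the")), ("pp", pp)])
deep    = ("See", [("object", np_deep)])

print(round(score(flat), 3), round(score(deep), 3))
# -> 0.23 0.122 : the deeper attachment gets the lower (better) score
```

Other behaviour can be obtained by letting the assumed `k` function return a reward (such as 0) or a penalty (such as 2) for particular modifier types, which is the tuning discussed next.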
It basically boils down to a system of rewards and penalties to make the metric reflect preferences determined heuristically. For example, the preference that a potential auxiliary verb is to be used as an auxiliary rather than as a main verb when both parses are possible can be realized by using a K of 0, a reward, when picking up an auxiliary verb. Similarly, a K of 2, a penalty, can be used to increase the score (thereby lessening the preference) when attaching an adverbial phrase as a premodifier in a lower level clause (rather than as a postmodifier in a higher level clause). When semantic information is available, it can be used to select appropriate values for K, too, such as using 100 for an anomalous combination.Straightforward application of the formula given above implies that the computation of the score can be done in a bottom-up fashion, as the modifiers of each phrase are picked up. However, it can also be done in a top-down manner after doing a little bit of algebra on the formula to expand it and regroup the terms. In the EPISTLE application it is the latter approach that is being used. There is actually a set of ten NLP encoding rules that do the computation in a downward traversal of a completed parse tree, determining the appropriate constant to use at each node. The top-down method of computation could be done during top-down parsing of the sort typically used with ATN's, also. To test the performance of the metric in our EPISTLE application, the parse trees of 80 multiple-parse sentences were analyzed to determine if the metric favored what we considered to he the best tree for our purposes. A raw calculation said it was right in 65% of the cases. However, further analysis of those cases where it was wrong showed that in half of them the parse that it favored was one which will not even be produced when we further refine our grammar rules. If we eliminate these from consideration, our success rate increases to 80%. Out of the remaining "failures," more than half are cases where semantic information is required to make the correct choice, and our system simply does not yet have enough such information to deal with these. The others, about 7%, will require further tuning of the constant K in the formula. (In fact, they all seem to involve VP conjunction, for which the metric has not been tuned at all yet.)The analysis just described was based on multiple parses of order 2 through 6. Another analysis was done separately on the double parses (i.e. order 2). The results were similar, but with an adjusted success rate of 85%, and with almost all of the remainder due to the need for more semantic information.It is also of interest to note that significant rightbranching occurred in about 75% of the eases for which the metric selected the best parse. Most of these were situations in which the grammar rules would allow a constituent to be attached at more than one level, but simply pushing it down to the lowest possible level with the metric turned out to produce the best parse.There has not been much in the literature about using numerical scores to rank alternative analyses of segments of text. One notable exception to this is the work at SRI (e.g., Paxton 1975 and Robinson 1975 , 1980 , where factor statements may be attached to an APSG rule to aid in the calculation of a score for a phrase formed by applying the rule. The score of a phrase is intended to express the likelihood that the phrase is a correct interpretation of the input. 
These scores apparently can be integers in the range 0 to 100 or symbols such as GOOD or POOR. This method of scoring phrases provides more flexibility than the metric of this paper, but also puts more of a burden on the grammar writer.

Another place in which scoring played an important role is the syntactic component of the BBN SPEECHLIS system (Bates 1976), where an integer score is assigned to each configuration during the processing of a sentence to reflect the likelihood that the path which terminates on that configuration is correct. The grammar writer must assign weights to each arc of the ATN grammar, but the rest of the computation appears to be done by the system, utilizing such information as the number of words in a constituent. Although this scoring mechanism worked very well for its intended purpose, it may not be more generally applicable.

A very specialized scoring scheme was used in the JIMMY3 system (Maxwell and Tuggle 1977), where each parse network is given an integer score calculated by rewarding the finding of the actor, object, modifiers, and prepositional phrases and punishing the ignoring of words and terms. Finally, there is Wilks' counting of dependencies to find the analysis with the greatest semantic density in his Preference Semantics work (e.g., Wilks 1975). Neither of these purports to propose scoring methods that are more generally applicable, either.
introduction:
described a technique for computing a number for each node during the bottom-up construction of a parse tree, such that a node with a smaller number is to be preferred to a node with a larger number covering the same portion of text. At the time, this scheme was used primarily to select among competing noun phrases in queries to a program explanation system. Although it appeared to work well, it was not extensively tested. Recently, as part of our research on the EPISTLE system, this idea has been modified and extended to work over entire sentences and to provide for top-down computation. Also, we have done an analysis of 80 sentences with multiple parses from our data base to evaluate the performance of this metric, and have found that it is producing very good results. This brief paper, which is actually an extended abstract for a forthcoming paper, begins with an introduction to the EPISTLE system, to set the stage for the current application of this metric. Then the metric's computation is described, followed by a discussion of the results of the 80-sentence analysis. Finally, some comparisons are made to related work by others.

In its current form, the EPISTLE system (Miller, Heidorn and Jensen 1981) is intended to do critiquing of a writer's use of English in business correspondence, and can do some amount of grammar and style checking. The central component of the system is a parser for assigning grammatical structures to input sentences. This is done with NLP, a LISP-based natural language processing system which uses augmented phrase structure grammar (APSG) rules (Heidorn 1975) to specify how text is to be converted into a network of nodes consisting of attribute-value pairs and how such a network can be converted into text. The first process, decoding, is done in a bottom-up, parallel processing fashion, and the inverse process, encoding, is done in a top-down, serial manner. In the current application the network which is constructed is simply a decorated parse tree, rather than a meaning representation.

Because EPISTLE must deal with unrestricted input (both in terms of vocabulary and syntactic constructions), we are trying to see how far we can get initially with almost no semantic information. In particular, our information about words is pretty much limited to parts-of-speech that come from an on-line version of a standard dictionary of over 100,000 entries, and the conditions in our 250 decoding rules are based primarily on syntactic cues. We strive for what we call a unique approximate parse for each sentence, a parse that is not necessarily semantically accurate (e.g., prepositional phrase attachments are not always done right) but one which is adequate for the text critiquing tasks, nevertheless.

One of the things we do periodically to test the performance of our parsing component is to run it on a set of 400 actual business letters, consisting of almost 2,300 sentences which range in length up to 63 words, averaging 19 words per sentence. In two recent runs of this data base, the following results were obtained:

    No. of parses    June 1981    Dec. 1981
    0                   57%          36%
    1                   31%          41%
    2                    6%          11%
    >2                   6%          12%

The improvement in performance from June to December can be attributed both to writing additional grammar rules and to relaxing overly restrictive conditions in other rules.
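As a small side check (illustrative only, not part of the original paper), the "multiple-parse" figure quoted in the next paragraph is simply the sum of the last two rows of this table:

```python
# Derived shares from the parse-distribution table above (percent of ~2,300 sentences).
june = {"0": 57, "1": 31, "2": 6, ">2": 6}
dec  = {"0": 36, "1": 41, "2": 11, ">2": 12}

for label, d in (("June 1981", june), ("Dec. 1981", dec)):
    print(f"{label}: no parse {d['0']}%, single {d['1']}%, multiple {d['2'] + d['>2']}%")
# June 1981: no parse 57%, single 31%, multiple 12%
# Dec. 1981: no parse 36%, single 41%, multiple 23%
```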
It can be seen that this not only had the desirable effect of reducing the percentage of no-parse sentences (from 57% to 36%) and increasing the percentage of single-parse sentences (from 31% to 41%), but it also had the undesirable side effect of increasing the multiple-parse sentences (from 12% to 23%). Because we expect this situation to continue as we further increase our grammatical coverage, the need for a method of ranking multiple parses in order to select the best one on which to base our grammar and style critiques is acutely felt.

The metric can be stated by the following recursive formula:

    Score_phrase = Σ_Mods K_Mod (Score_Mod + 1)

where the lowest score is considered to be the best. This formula says that the score associated with a phrase is equal to the sum of the scores of the modifying phrases of that phrase adjusted in a particular way, namely that the score of each modifier is increased by 1 and then multiplied by a constant K appropriate for that type of modifier. A phrase with no modifiers, such as an individual word, has a score of 0. This metric is based on a flat view of syntactic structure which says that each phrase consists of a head word and zero or more pre- and post-modifying phrases. (In this view a sentence is just a big verb phrase, with modifiers such as subject, objects, adverbs, and subordinate clauses.) In its simplest form this metric can be considered to be nothing more than the numerical realization of Kimball's Principle Number Two (Kimball 1972): "Terminal symbols optimally associate to the lowest nonterminal node." (Although Kimball calls this principle right association and illustrates it with right-branching examples, it can often apply equally well to left-branching structures.) One way to achieve this simplest form is to use a K of 0.1 for all types of modifiers.

An example of the application of the metric in this simplest form is given in Figure 1. Two parse trees are shown for the sentence, "See the man with the telescope," with a score attached to each node (other than those that are zero). A node marked with an asterisk is the head of its respective phrase. In this form of flat parse tree a prepositional phrase is displayed as a noun phrase with the preposition as an additional premodifier. As an example of the calculation, the score of the PP here is computed as 0.1(0+1) + 0.1(0+1), because the scores of its modifiers, the ADJ and the PREP, are each 0. Similarly, the score of the NP in the second parse tree is computed as 0.1(0+1) + 0.1(0.2+1), where the 0.2 within it is the score of the PP. It can be seen from the example that in this simplest form the individual digits of the score after the decimal point tell how many modifiers appear at each level in the phrase (as long as there are no more than nine modifiers at any level). The farther down in the parse tree a constituent is pushed, the farther to the right in the final score its contribution will appear. Hence, a deeper structure will tend to have a smaller score than a shallower structure, and, therefore, be preferred. In the example, this is the second tree, with a score of 0.122 vs. 0.23. That is not to say that this would be the semantically correct tree for this sentence in all contexts, but only that if a choice cannot be made on any other grounds, this tree is to be preferred. Applying the metric in its simplest form does not produce the desired result for all grammatical constructions, so that values for K other than 0.1 must be used for some types of modifiers.
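As a hedged sketch of that last point (the modifier-type names and table layout below are invented for the illustration; the constant values are the ones mentioned in the surrounding discussion, and this is not the actual EPISTLE rule set), the scorer sketched earlier can simply look its constant up by modifier type:

```python
# Illustrative variant: K depends on the type of the modifier being attached,
# so the metric can encode rewards (small K) and penalties (large K).
K_BY_TYPE = {
    "aux_verb": 0.0,               # reward: prefer the auxiliary reading of a potential auxiliary
    "advp_in_lower_clause": 2.0,   # penalty: adverbial premodifier attached in a lower clause
    "anomalous": 100.0,            # semantically anomalous combination, when such information exists
}
DEFAULT_K = 0.1

def score(phrase):
    """phrase = (modifier_type, [modifier phrases]); lower scores are preferred."""
    _, mods = phrase
    return sum(K_BY_TYPE.get(m[0], DEFAULT_K) * (score(m) + 1.0) for m in mods)

# A clause that picked up "have" as an auxiliary beats one treating it as the main verb:
aux_reading  = ("clause", [("aux_verb", []), ("word", [])])
main_reading = ("clause", [("word", []), ("word", [])])
print(score(aux_reading), score(main_reading))   # 0.1 0.2  -> auxiliary reading preferred
```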
It basically boils down to a system of rewards and penalties to make the metric reflect preferences determined heuristically. For example, the preference that a potential auxiliary verb is to be used as an auxiliary rather than as a main verb when both parses are possible can be realized by using a K of 0, a reward, when picking up an auxiliary verb. Similarly, a K of 2, a penalty, can be used to increase the score (thereby lessening the preference) when attaching an adverbial phrase as a premodifier in a lower level clause (rather than as a postmodifier in a higher level clause). When semantic information is available, it can be used to select appropriate values for K, too, such as using 100 for an anomalous combination.Straightforward application of the formula given above implies that the computation of the score can be done in a bottom-up fashion, as the modifiers of each phrase are picked up. However, it can also be done in a top-down manner after doing a little bit of algebra on the formula to expand it and regroup the terms. In the EPISTLE application it is the latter approach that is being used. There is actually a set of ten NLP encoding rules that do the computation in a downward traversal of a completed parse tree, determining the appropriate constant to use at each node. The top-down method of computation could be done during top-down parsing of the sort typically used with ATN's, also. To test the performance of the metric in our EPISTLE application, the parse trees of 80 multiple-parse sentences were analyzed to determine if the metric favored what we considered to he the best tree for our purposes. A raw calculation said it was right in 65% of the cases. However, further analysis of those cases where it was wrong showed that in half of them the parse that it favored was one which will not even be produced when we further refine our grammar rules. If we eliminate these from consideration, our success rate increases to 80%. Out of the remaining "failures," more than half are cases where semantic information is required to make the correct choice, and our system simply does not yet have enough such information to deal with these. The others, about 7%, will require further tuning of the constant K in the formula. (In fact, they all seem to involve VP conjunction, for which the metric has not been tuned at all yet.)The analysis just described was based on multiple parses of order 2 through 6. Another analysis was done separately on the double parses (i.e. order 2). The results were similar, but with an adjusted success rate of 85%, and with almost all of the remainder due to the need for more semantic information.It is also of interest to note that significant rightbranching occurred in about 75% of the eases for which the metric selected the best parse. Most of these were situations in which the grammar rules would allow a constituent to be attached at more than one level, but simply pushing it down to the lowest possible level with the metric turned out to produce the best parse.There has not been much in the literature about using numerical scores to rank alternative analyses of segments of text. One notable exception to this is the work at SRI (e.g., Paxton 1975 and Robinson 1975 , 1980 , where factor statements may be attached to an APSG rule to aid in the calculation of a score for a phrase formed by applying the rule. The score of a phrase is intended to express the likelihood that the phrase is a correct interpretation of the input. 
These scores apparently can be integers in the range 0 to 100 or symbols such as GOOD or POOR. This method of scoring phrases provides more flexibility than the metric of this paper, but also puts more of a burden on the grammar writer.

Another place in which scoring played an important role is the syntactic component of the BBN SPEECHLIS system (Bates 1976), where an integer score is assigned to each configuration during the processing of a sentence to reflect the likelihood that the path which terminates on that configuration is correct. The grammar writer must assign weights to each arc of the ATN grammar, but the rest of the computation appears to be done by the system, utilizing such information as the number of words in a constituent. Although this scoring mechanism worked very well for its intended purpose, it may not be more generally applicable.

A very specialized scoring scheme was used in the JIMMY3 system (Maxwell and Tuggle 1977), where each parse network is given an integer score calculated by rewarding the finding of the actor, object, modifiers, and prepositional phrases and punishing the ignoring of words and terms. Finally, there is Wilks' counting of dependencies to find the analysis with the greatest semantic density in his Preference Semantics work (e.g., Wilks 1975). Neither of these purports to propose scoring methods that are more generally applicable, either.
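As a closing illustration of the top-down computation mentioned earlier (a hedged sketch, not the set of ten NLP encoding rules referred to above): expanding the recursive formula and regrouping terms shows that each modifier contributes the product of the K constants along its path from the root, so the score can be accumulated on the way down.

```python
# Illustrative top-down computation of the same metric (simplest form, K = 0.1).
# A phrase is (label, [modifier phrases]); this is equivalent to the bottom-up formula.

def score_top_down(phrase, k=0.1, weight=1.0):
    _, mods = phrase
    total = 0.0
    for m in mods:
        total += weight * k                          # this modifier's own contribution
        total += score_top_down(m, k, weight * k)    # its modifiers, one level deeper
    return total

# The object NP of the second "telescope" parse, computed as 0.1(0+1) + 0.1(0.2+1) = 0.22 in the text:
np = ("NP", [("DET", []), ("PP", [("PREP", []), ("DET", [])])])
print(round(score_top_down(np), 3))   # 0.22
```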
Appendix:
| null | null | null | null | {
"paperhash": [
"robinson|diagram:_a_grammar_for_dialogues",
"miller|text-critiquing_with_the_epistle_system:_an_author's_aid_to_better_syntax",
"robinson|a_tuneable_performance_grammar",
"heidorn|augmented_phrase_structure_grammars",
"wilks|an_intelligent_analyzer_and_understander_of_english",
"maxwell|towards_a_natural_language_question_answering_facility"
],
"title": [
"DIAGRAM: a grammar for dialogues",
"Text-critiquing with the EPISTLE system: an author's aid to better syntax",
"A Tuneable Performance Grammar",
"Augmented Phrase Structure Grammars",
"An intelligent analyzer and understander of English",
"Towards A Natural Language Question Answering Facility"
],
"abstract": [
"An explanatory overview is given of DIAGRAM, a large and complex grammar used in an artificial intelligence system for interpreting English dialogue. DIAGRAM is an augmented phrase-structure grammar with rule procedures that allow phrases to inherit attributes from their constituents and to acquire attributes from the larger phrases in which they themselves are constituents. These attributes are used to set context-sensitive constraints on the acceptance of an analysis. Constraints can be imposed by conditions on dominance as well as by conditions on constituency. Rule procedures can also assign scores to an analysis to rate it as probable or unlikely. Less likely analyses can be ignored by the procedures that interpret the utterance. For every expression it analyzes, DIAGRAM provides an annotated description of the structure. The annotations supply important information for other parts of the system that interpret the expression in the context of a dialogue.\nMajor design decisions are explained and illustrated. Some contrasts with transformational grammars are pointed out and problems that motivate a plan to use metarules in the future are discussed. (Metarules derive new rules from a set of base rules to achieve the kind of generality previously captured by transformational grammars but without having to perform transformations on syntactic analyses.)",
"The experimental EPISTLE system is ultimately intended to provide office workers with intelligent applications for the processing of natural language text, particularly business correspondence. A variety of possible critiques of textual material are identified in this paper, but the discussion focuses on the system's capability to detect several classes of grammatical errors, such as disagreement in number between the subject and the verb. The system's error-detection performance relies critically on its parsing component which determines the syntactic structure of each sentence and the grammatical functions fulfilled by various phrases. Details of the system's operations are provided, and some of the future critiquing objectives are outlined.",
"Abstract : This paper describes a tuneable performance grammar currently being developed for speech understanding. It shows how attributes of words are defined and propagated to successively larger phrases, how other attributes are acquired, how factors reference them to help the parser choose among competing definitions to interpret the utterance correctly, and how these factors can easily be changed to adapt the grammar to other discourses and contexts. Factors that might be classified as \"syntactic\" are emphasized, but the attributes they reference need not be, and seldom are, purely syntactic.",
"Augmented phrase structure grammars consist of phrase structure rules with embedded conditions and structure-building actions written in a specially developed language. An attribute-value, record-oriented information structure is an integral part of the theory.",
"The paper describes a working analysis and generation program for natural language, which handles paragraph length input. Its core is a system of preferential choice between deep semantic patterns, based on what we call “semantic density.” The system is contrasted: with syntax oriented linguistic approaches, and with theorem proving approaches to the understanding problem.",
"**lli & p h 110 es1I, pr ices t l , \"deep thought\", \"being p o l i t i c s u , \"a book pn sociology\", l l s e t t l n g t h e idea\", e t c . Heretofore, t h i s has been only an obscrvation. Evcn Schank's work, w i t h i t s dccompositions i n t o YTTUWS, ATRANS, and MTKANS, is onlv suegcs t ive of an underlying un i ty , and Jackendoff I s c l a s s i f i c a t i o n of word senses i n t o p o s i t i o n a l , possess iona l , idenf i f i c a t i a n a l , and c i rcumstan t i a l modes remains only a c l a s s i f i ca t inn . This pal-er desc r ibes an approach which a t i l i ~ e s tire s p t i a l metaphor i n cons t ruc t ing econornical d e f i n i t i o n s of H a l l p r ~ r p o s e l words t h d t have previously de f i ed p r e c i s e specif j ca t ion , and a method f o r i n t e r p r e t i n g these words i n context which t r e a t s metdphor not as an anomoly but a s t h e na tu ra l s t a t e of a f f a i r s . The basic idea is t o clef rne words i n t c r n s of very genera l s p a t i a l p red ica tes and then, i n t h e a n a l y s i s of a give2 t e x t , t o ~ c e k a more s p e c i f i c , context-dependent i n t e r p r e t a t i n n , nr kinding, j u s t as a compiler o r i n t c a p r e t e r seeks bindings f o r t h e v a r i a b l e s and procedrire names mentioned i n a program. I n t e r p r e t a t i o n a s Binding: I n yrograrn~r~infr l a n g u a ~ c s , there is normally a f ixed means of ctetermihing bindinqs. e i t h e r by fa l lowing a chain of access modulcn (2: or by consu l t ing an a l i s t o r PUNARG-frozen environment. Van E~nden & Kowalski (8) have presented unothcr outlook. I n a mechanical theorem proving sys te~u , they show h o w Horn c l d r ~ s e s nay be viewed as procedure d e c l a r a t i o n s i n which t h e p o s i t i v e l i t e r a l i s a procedure name, the negat ive l i t e r a l s the procedure ACL Meeting 1977 85 body, and each n e g a t i v e l i t e r a l a c a l l t o another procedure. t i set ef Horn c l a u s e s is a n o n d e t e r s i n i s t i c program, non-de te rmin i s t i c because s e v e r a l tforn c l a u s e s may have t h e same p o s i t i v e l i t e r a l . That is, the procedure name in a procedure c a l l may be bound t o one of s e v e r a l d i f f e r m t p r o c e d u r ~ bodies. Resolut ion is an a t t empt t o bind a procedure name i n f i way t h a t l e a d s t o t h e d e s i r e d r e f u t a t i o n . Put i n another way, we may view the i n f e r c n c e \";\\>Bt1 a s s p e c i f y i n g A a s a p o s s i b l e b inding f o r H. Montague (6,4) developed a v a r i e t y of i n t e n s i o n a l l o g i c as a represen ta t ion f o r n a t u r a l language. I n h i s formalism, i n d i v i d u a l words can be d e f i n e d a s f u n c r i ~ n s expressed i n terms of i n t e n s i o n s , i.e. v a r i a b l e s and procedure names. S y n t s c t i c r e l d t i o n s i n English a r e t r a n q l a t e d i n t o f u n c t i o n a p p l i c a t i o n s i n i n t e n s i o n a l logic. These func t ion a p p l i c a t i o n s bind t h e i n t e n s i o n s t o s p e c i f i c i n t e r p r e t a t i o n s . In this way the meanings ,of indivPdua1 words a r e composed i n t o the meaning of t h e sentence. However, t h e binding mechanism is q u i t e f i x t d , making the f o r n ~ a l i s m insuf f i c i e p t l y f l e x i b l e f o r t h e wliole range of n a t u r a l language. Our approach combines Montagpets wi th t h a t of Van Emden & Kowalski. 
A s i n Mon taguc 's approach, i n d i v i d u a l words a r e d e f i n e d i n terms of genecal pre r l i ca tes t h a t may be viewed as unbound p r e d i c a t e namea, and. t h e i r b ind ings i n a gjvcn t ex t are determined From s y n t a c t i c a l l y r e l a t e & words. Ilowever, t h e b ind ing mechanism is not f i x e d , but as with Van Bmdcn & Kowalski, i t is a s e a r c h f o r a chain of in fe rence which culnl inates in an express ion invo lv ing t h e gehera l p red ica te . An example is given belod. I n a d d i t i o n , a dynamic o r d e r i n g determined by con tex t is imposed on the axioms i n t h e d a t a base of l e x i c a l and world k ~ i i i w l c d g e , d e f i n i n g an o r d e r i n g on c h a i n s of inference. The, bind ing is chosen which is given by t h e dWL Meeting 1977 86 chain of inference Irighest in t h i s ordering. The Spa t i a l Metaphor: A t tile base of the ~ c x i c o n , o r s e t of axioms, are a m d l l number of pr imit ive not ions w i t h a highly s p a t i a l o r v i sua l flavor. Among these a re * * ~ c a l e ~ ~ o r a p a r t i a l ordering defined by possible changes of s t a t e , the r e l a t i on \"onw which places poin ts on the sca l e , and* %itt1 which among o ther th ings r e l a t e s an e n t i t y t o a point on a scale . Moreover, \"at1' is re la ted t o predic-ation: f o r an e n t i t y t o be at a predicat ion is f o r the e n t i t y t o be one of its arguments, a s i l l u s t r a t e d by the equivalence John is hard a t work 3 John is working hard. Concepts a t higher l eve l s of the Lexicon are defined i l l terms of these basic s p a t i a l conccpts. For example, \"to think of\"' o r \"to have i n mindt8 i s defined as a va r i e ty of \"att1 Time i s a sca le , and an event may be 5 a point on t h a t scale . A s e t may a l so be though t of as a scale and i t s elements a s being points on the scale . Kote tha t t h i s takes ser ious ly tlre visual image one h a s of a s e t as the elements spread out before one. Final ly , \" a l l p ~ r p o s e ' ~ words such a s t h e common adverbs and preposi t ions arc defined in terms of the bas ic concepts l i k e \"scalett , \"ann, and glatw. In t h e ana lys i s of a t ex t , we f i nd in t e rp re t a t ions f o r these basic concepts by f inding chain8 of inference from proper t ies of the arguments of the \"411-purpose\" war-ds to g,ropos i t i o n s involving the basic concepts. Simplified Exanlple: Consider llJolin i s i n pol i t ics\" . S~r>pose atin\" means to be at a poikt on a ?talc. We nrust f ind bindings fo r t h e underlined words. P o l i t i c s i s a s e t of a c t i v i t i e s d i rec ted toward the goal of obtaining dnd using power i n an ort;anization. r\\ seta is q q a scale . The typ ica l a c t i v i t y is on the Scale. For John to be a t such an a c t i v i t y is f o r him t o Ire one of the p a r t i t i p a n t s i n it. MX Meeting 1977 87 I'I~US, f o r John t o be i n p o l i t i c s i s f o r h i m t o engage i n the a c t i v i t i e s t h a t c l ~ a r a c t e r i z e p o l i t i c s . Otliez examples i l l u s t r a t i r ~ g the d iskinct ic-n be.tween \"in\" and I1onft and tlie meaning of that elusive adverb \"even\" will be presented. Signif icance: T h i s work represen t s . an arlvance i n our understanding of how meanings of worcls am, composed i n t o the meanings of l a r g e r s t r e t c h e s of t e x t , and of the e f f e c t of con tex t on i n t e r p r e t a t i o n . 
Moreover, i t is the r e s u l t of a happy blend of computational o r l o g i c a l tcchnioue with l i n g u i s t i c and psychological ins igh t s . Bibliography 1. Aschi S.X. 'The Metaphor: A Psychological Inqu i ryn i n dl Rcnley, cd. Docun~ents of Ciestal t Psychology, 1961. 2. Bobrow, D. & B. We@reit. \"A Model and Stack Implementation of M i l l t iple .Environmentsf1 CACM, Octohe r 1973, p. 591. 3. Clark, Herbert 11. l1Space, Time, Semantics, and t h e Child'' i n Cogni t ive Develnpment and the Acqr~ i s i t i o n of Langilage. 2973. 4. Hobbs, J. B S. Rosenschein. Making Computational Sense of Montague 's I n tenssonal Logic. Report No. NSO-11, Courant I n s t i t u t e of Mathematical Sciences. December 1976. 5. Jackendoff, Ray. \"Toward an Expl ana to ry Semantic Representa t ionv L i n g u i s t i c Inquiry , Winter 1976. p. 89. 6. Mon tague, Richard. \"The Proper Treatment of Quan t i f i ca t ion i n Ordinary-English11 i n Approaches to Natural Canguage, 1973. 7. Schank, R., N. Goldhun, d. Rieger, & C. Riesbeck. IfInference and Paraphrase by Computer1' JACM, July 1975. P. 309. 8. Van Emden, M.H. & R.A. Kowalski, 'The Semantics of P r e d i c a t e Logic a s a Programming Language\" JAW, October 1976. p. 733. 9. Whorf, Ben jamin. 'The Rela t ion of Habi tual Thought and Heliav i o r t o Languagef1 i n Language, Thought, k Reqlity, 2956."
],
"authors": [
{
"name": [
"Jane J. Robinson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"L. A. Miller",
"George E. Heidorn",
"Karen Jensen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jane J. Robinson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"George E. Heidorn"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Y. Wilks"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Bill D. Maxwell",
"F. D. Tuggle"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"17788520",
"17922808",
"58148141",
"2658668",
"5968738",
"36815393"
],
"intents": [
[
"background"
],
[],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false
]
} | - Problem: The paper aims to address the issue of ranking the desirability of alternative parses during the construction of a parse tree, specifically focusing on the EPISTLE system developed at IBM Research.
- Solution: The paper proposes a metric for computing the desirability of alternative parses by assigning scores to phrases based on a recursive formula, with the goal of pushing constituents as far down as possible in the parse tree. | 512 | 0.0625 | null | null | null | null | null | null | null | null |
a88957ed6f8339d65286a07cd727be1a466695f4 | 9564084 | null | Translating {E}nglish Into Logical Form | A scheme for syntax-directed translation that mirrors compositional model-theoretic semantics is discussed. The scheme is the basis for an English translation system called PArR and was used to specify a semantically interesting fragment of English, including such constructs as tense, aspect, modals, and various iexically controlled verb complement structures. PATR was embedded in a question-answering system that replied appropriately to questions requiring the computation of logical entailments. | {
"name": [
"Rosenschein, Stanley J. and",
"Shieber, Stuart M."
],
"affiliation": [
null,
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 17 | 33 | null | When contemporary linguists and philosophers speak of "semantics," they usually mean m0del-theoretic semantics-mathematical devices for associating truth conditions with Sentences. Computational linguists, on the other hand, often use the term "semantics" to denote a phase of processing in which a data structure (e.g., a formula or network) is constructed to represent the meaning of a sentence and serve as input to later phases of processing. {A better name for this process might be "translation" or "traneduction.") Whether one takes "semantics" to be about model theory or translation, the fact remains that natural languages are marked by a wealth of complex constructions--such as tense, aspect, moods, plurals, modality, adverbials, degree terms, and sententiai complemonts--that make semantic specification a complex and challenging endeavor.Computer scientists faced with the problem of managing software complexity have developed strict design disciplines in their programming methodologies. One might speculate that a similar requirement for manageability has led linguists (since Montague, at least) to follow a discipline of strict compositiouality in semantic specification, even though model*theoretic semantics per me does not demand it. Compositionaiity requires that the meaning of a pbrase be a function of the meanings of its immediate constituents, a property that allows the grammar writer to correlate syntax and semantics on a rule-by-rule basis and keep the specification modular. Clearly, the natural analogue to compositionality in the case of translation is syntax-directed translation; it is this analogy that we seek to exploit.We describe a syntax-directed translation scheme that bears a close resemblance to model-theoretic approaches and achieves a level of perspicuity suitable for the development of large and complex grammars by using a declarative format for specifying grammar rules. In our formalism, translation types are associated with the phrasal categories of English in much the way that logical-denotation types are associated Artificial Intelligence Center SRI International 333 Raveoswood Avenue Menlo Park, CA 94025 with phrasal categories in model-theoretic semantics. The translation 'types are classes of data objects rather than abstract denotations, yet they play much the same role in the translation process that denotation types play in formal semantics.In addition to this parallel between logical types and translation types, we have intentionally designed the language in which translation rules are stated to emphasize parallels between the syntaxdirected translation and corresponding model-theoretic interpretation rules found in, say, the GPSG literature [Gazdar, forthcoming] . In the GPSG approach, each syntax rule has an associated semantic rule (typically involving functional application) that specifies how to compose the meaning of a phrase from the meanings of its constituents. 
In an analogous fashion, we provide for the translation of a phrase to be synthesized from the translations of its immediate constituents according to a local rule, typically involving symbol/c application and~-conversiou.It should be noted in passing that doing translation rather than model theoretic interpretation offers the temptation to abuse the formalism by having the "meaning" (translation) of a phrase depend on syntactic properties of the translations of its constituents--for instance, on the order of conjuncts in a logical expression. There are several points to be made in this regard. First, without severe a priori restrictions on what kinds of objects can be translations (coupled with the associated strong theoretical claims that such restrictions would embody) it seems impossible to prevent such abuses. Second, as in the case of programming languages, it is reasonable to mmume that there would emerge a set of stylistic practices that would govern the actual form of grammars for reasons of manageability and esthetics. Third, it is still an open question whether the model*theoretic program of strong compositiouality will actually succeed. Indeed, whether it succeeds or not is of little concern to the computational linguist, whose systems, in any event, have no direct way of using the sort of abstract model being proposed and whose systems must, iu general, be based on deduction (and hence translation).The rest of the paper discusses our work in more detail. Section II presents the grammar formalism and describes PATR, an implemented parsing and translation system that can accept a grammar in our formalism and uses it to process sentences. Examples of the system's operation, including its application in a simple deductive question-answering system, are found in Section HI. Finally, Section IV describes further extensions of the formalism and the parsing system. Three appendices are included: the first contains sample grammar rules; the second contains meaning postulates (axioms) used by the question-answering system; the third presents a sample dialogue session. Our grammar formalism is beet characterized as n specialized type of augmented context-free grammar° That is, we take a grammar to be a set of context-fres rules that define a language and associate structural descriptions (parse trees) for each sentence in that language in the usual way. Nodes in the parse tree are assumed to have a set of features which may assume binary values (True or False), and there is a distinguished attribute--the "translation'--whoee values range over a potentially infinite set of objects, i.e., the translations of English phrases.Viewed more abstractly, we regard translation as a binary relation between word sequences and logical formulas. The use of a relation is intended to incorporate the fact that many word sequences have several logical forms, while some have none at all. 
Furthermore, we view this relation as being composed (in the mathematical sense) of four simpler relations corresponding to the conceptual phases of analysis: (1) LEX (lexical analysis), (2) PARSE (parsing), (3) ANNOTATE (assignment of attribute values, syntactic filtering), and (4) TRANSLATE (translation proper, i.e., synthesis of logical form).The domains and ranges of these relations are as follows:Word Sequences -LEX-* Morpheme Sequences -PARSE-* Phrase Structure Trees -ANNOTATE-* Annotated Trees -TRANSLATE-* Logical FormThe relational composition of these four relations is the full translation relation associating word sequences with logical forms. The subphases too are viewed as relations to reflect the inherent nondeterminism of each stage of the process. For example, the sentence =a hat by every designer sent from Paris was felt" is easily seen to be nondeterministic in LEX ('felt'), PARSE (poetnominal modifier attachment), and TRANSLATE (quantifier scoping).It should be emphasized that the correspondence between processing phases and these conceptual phases is loose. The goal of the separation is to make specification of the process perspicous and to allow simple, clean implementations. An actual system could achieve the net effect of the various stages in many ways, and numerous optimizatious could be envisioned that would have the effect of folding back later phases to increase efficiency. Tr,=,:{ couP' [~'] t~'] } lEXICON: If -* John Aano: [Proper(W) ] Truss: { John } TENSE -* &put Trash: { (X x CpastX)) } V-*go Anon: [ -~Trasnitivn(V) ]Trnn: { C~ x Can x)) } Figure 1 : Sample specification of augmented phrase structure grammar propriate to each phase and illustrate how the word sequence "John went" is analyzed by stages as standing in tbe translation relation to "(past (go john))" according to the (trivial) grammar presented in Figure 1 . The kernel relation is extended in a standard fashion to the full LEX relation. For example, "went" is mapped onto the single morpheme sequence (&past go), and "John" is mapped to (john). Thus, by extension, "John went" is transformed to (John &post go) by the lexical analysis phase.Parsing is specified in the usual manner by a context-free grammar. Utilizing the eontext,-free rules presented in the sample system specification shown in Figure 1 , (John 8cpast go) is transformed into the parse tree (S (NP john)C~ (r~rsE tput) Cvso)))Every node in the parse tree has a set of associated features. The purpo6e of ANNOTATE is to relate the bate parse tree to one that has been enhanced with attribute values, filtering out three that do not satisfy stated syntactic restrictions. These restrictions are given as Boolean expressions associated with the context-free rules; a tree is properly annotated only if all the Boolean expressions corresponding to the rules used in the analysis are simultaneously true. Again, using the rules of C The Relation TRANSLATE Logical-form synthesis rules are specified as augments to the context-free grammar. There is a language whose expressions denote translations (syntactic formulas); an expression from this language is attached to each context-free rule and serves to define the composite translation at a node in terms of the translations of its immediate constituents. In the sample sentence, TENSE' and V' {the translations of TENSE and V respectively) would denote the ),-expressions specified in their respective translation rules. 
VP' {the translation of the VP) is defined to be the value of (SAP (SAP COMP' TENSE') V'), where COMF' is a constant k-expression and SAP is the symbolic-application operator. This works out to be (k X [past (go X))). Finally, the symbolic application of VP' to N'P' yields (past (go John)). (For convenience we shall henceforth use square brackets for SAP and designate (SAP a ~) by a[~].)Before describing the symbolic-application operator in more detail, it is necessary to explain the exact nature of the data objects serving as translations. At one level, it is convenient to think of the translations as X-expressions, since X-expressions are a convenient notation for specifying how fragments of a translation are substituted into their appropriate operator-operand positions in the formula being assembled-especially when the composition rules follow the syntactic structure as encoded in the parse tree. There are several phenomena, however, that require the storage of more information at a node than can be represented in a bare k-expression. Two of the most conspicuous phenonema of this type are quantifier scoping and unbounded dependencies ("gaps").Our approach to quantifier scoping has been to take a version of Cooper's storage technique, originally proposed in the context of model-tbeoretic semantics, [Cooper, forthcoming[ and adapt it to the needs of translation. For the time being, let us take translations to be ordered pairs whose first component (the head) is an expression in the target language, characteristically a k-expression. The second component of the pair is an object called storage, a structured collection of sentential operators that can be applied to a sentence matrix in such a way as to introduce a quantifier and "capture" a free variable occurring in that sentence matrix. 2 For example, the translation of "a happy man" might be < m , (X S (some m (and (man m)(happy m)) S)) >.s Here the head is m (simply a free variable), and storage consists of the X-expression (k S 2in the sample grammar presented in Appendix A, the storage.formlng operation is notated mk.mbd. 3Followlng [Moore, lO80~, a quantified expression is of the form (quauti6er, variable, restriction, body) ...). If the verb phrase "sleeps ~ were to receive the translation < (X X (sleep X)), ~ > (i.e., a unary predicate as head and no storage), then the symbolic application of the verb phrase translation to the noun phrase translation would compose the heads in the usual way and take the "uniou" of the storage yielding < (sleep m), (k S (some m (and (man m)(happy m)) S)) >.We define an operation called ~pull.s," which has the effect of "pulling" the sentence operator out of storage and applying it to the head. There is another pull operation, pull.v, which operates on heads representing unary predicates rather than sentence matrices. When pull.s is applied in our example, it yields < (some m (and (man m)(happy m)) (sleep m)), ~b >, corresponding to the translation of the clause ~a happy man sleeps." Note that in the process the free variable m has been "captured." In model-theoretic semantics this capture would ordinarily be meaningless, although one can complicate the mathematical machinery to achieve the same effect. Since translation is fundamentally a syntactic process, however, this operation is welldefined and quite natural.To handle gaps, we enriched the translations with a third component: a variable corresponding to the gapped position. 
For example, the translation of the relative clause ".,.[that] the man saw" would be a triple: < (past (see X Y)), Y, (k S (the X (man X) $))>, where the second component, Y, tracks the free variable corresponding to the gap. At the node at which the gap was to be discharged, X-abstraction would occur (as specified in the grammar by the operation "uugap') producing the unary predicate (X Y (past (see X Y))), which would ultimately be applied to the variable corresponding to the head of the noun phrase.It turns out that triples consisting of (head, var, storage) are adequate to serve as translations of a large class of phrases, but that the application operator needs to distinguish two subcases (which we call type A and type B objects). Until now we have been discussing type A objects, whose application rule is given (roughly) as < hal,vat,san>l< hal',vat',san'>[ -~ <(hd hd'),var LI var', sto i3 sto'> where one of vat or vat' must be null. In the ease of type B objects, which are assigned primarily as translations of determiners, the rule is var ,san > [< hd',var',sto' >] = <var, var', hd(hd') U sto U sto'> For example, if the meaning of "every" is every' ~-<(k P (X S (every X (P X) S))), X, ~b> and the meaning of ~man" is man' ----< man, ~, ~ > then the meaning of "every man" is every'[man'] = ( X , ¢, (X S (man X) S)> , as expected.< h d,Nondeterminism enters in two ways. First, since pull opera, tions can be invoked nondeterministically at various nodes in the parse tree (as specified by the grammar), there exists the possibility of computing multiple scopings for a single context-free parse tree. (See Section III.B for an example of this phenomenon.) In addition, the grammar writer can specify explicit nondeterminism by associating several distinct translation rules with a single context-free production. In this case, he can control the application of a translation schema by specifying for each schema a guard, a Boolean combination of features that the nodes analyzed by the production must satisfy in order for the translation schema to be applicable.The techniques presented in Sections H.B and II.C were implemented in a parsing and translation system called PATR which was used as a component in a dialogue system discussed in Section III.B. The input to the system is a sentence, which is preprocessed by a lexical analyzer. Parsing is performed by a simple recursive descent parser, augmented to add annotations to the nodes of the parse tree. Translation is then done in a separate pass over the annotated parse tree. Thus the four conceptual phases are implemented as three actual processing phases. This folding of two phases into one was done purely for reasons of efficiency and has no effect on the actual results obtained by the system. Functions to perform the storage manipulation, gap handling, and the other features of translation presented earlier have all been realized in the translation component of the running system. The next section describes an actual grammar that has been used in conjunction with this translation system.To illustrate the ease with which diverse semantic features could be handled, a grammar was written that defines a semantically interesting fragment of English along with its translation into logical form [Moore, 1981] . The grammar for the fragment illustrated in this dialogue is compact occupying only a few pages, yet it gives both syntax and semantics for modais, tense, aspect, passives, and lexically controlled infinitival complements. 
(A portion of the grammar is included as Appendix A.) 4 The full test grammar, Io,~ely based on DIAGRAM [Robinson, 1982] but restricted and modified to reflect changes in a~ proach, was the grammar used to specify the translations of the sentences in the sample dialogue of Appendix C.The grammar presented in Appendix A encodes a relation between sentences and expressions in logical form. We now present a sample of this relation, as well as its derivation, with a sample sentence: "Every man persuaded a woman to go." Lexical analysis relates the sample sentence to two morpheme streams: every man &ppi persuade a woman to go 4Since this is just a small portion of the actual grammar selected for expository purposes, many of the phrasal categories and annotations will seem unmotivated and needlessly complex. These categories and annotations m'e utilized elsewhere in the test grammar.*, every man ,~past persuade a woman to go.The first is immediately eliminated because there is no context-free parse for it in the grammar. The second, however, is parsed as [S (SDEC (NP (DETP (DDET (VET every))) C~u CN0m~V (SOUN Cs re,a))))) (Pn~ICar~ (*u~ (TE~E kpaat)) (VPP (V? CV?T (Vpersuado))) (~ (DET? CA a)) (~u (Nnm~ (~vtm CN womm) )))) (INFINITIVE (TO to) CV~ Cv? CWT CV go]While parsing is being done, annotations are added to each node of the parse tree. For instance, the NP -* DETP NOM rule includes the annotation rule AGREE( NP, DETP, Definite ). AGREE is one of a set of macros defined for the convenience of the grammar writer. This particular macro invocation is equivalent to the Boolean expression Definite(NP) ~ Definite(DETP). Since the DETP node itself has the annotation Definite as a result of the preceding annotation process, the NP node now gets the annotation Definite as wello At the bottom level, the Definite annotation was derived from the lexical entry for the word "evesy'. s The whole parse tree receives the following annotation:[S Cb'~O (lqP: Delinite (DETP: DeBnite CDDET: DeBnite (DET: DeBuite eve1"y) ) ) CNOU (stump CNO~ CSm~))))) CPR~ICATE CAU~ CTENSE ~put)) (VPP CVP: Active (VPT: Active, Ttansitlve, Takesln?(V: Active, Transitive, Takesfn[ porsuade) ) )0~' (DET? CA a) ) CNOU C~la'~ C~ml C~ ,,on~))))) CDr~ISZTZ'W (TO to) (vPP (w: Active (VPT: Active Cv: Active sol Finally, the entire annotated parse tree is traversed to assign translations to the nodes through a direct implementation of the process described in Section II.C. (Type A and B objects in the following examples are marked with a prefix 'A:' or 'B:'.) For instance, the VP node covering (persuade a woman to go), has the translation rule VPT'[N'P'][INFINITIVE']. When this is applied to the translations of the node's constituents, we have CA: CA X CA P (~ T (persuade ¥ X (P X)))~[,CA: X2. ~,. C~ S (some X2 Cwomu X2) S))~][cA:(~x C~x))~]which, after the appropriate applications are performed, yields CA: CAP (~Y (persuade YX2 CPX2)))). ~, (A S (some X2 (~-X2) S))~ 5Note that, although the annotation phase was described and is implemented procedurally, the process actually used guarantees that the resulting annotation is ex" "t|y the one specified declaratlve~y by the annotation rules.[o,: (A x (gox))>] = CA: ()/¥ (persuadeTX2 (goX2))). ~b, CA S (some X2 (roman X2) S))~After the past operator has been applied, we have <A: CA T (pant (persumde YX2 (goX2)))). 
~b, CA S (some X2 (~znu X2) S)))At this point, the pull operator (pull.v) can be used to bring the quantifier out of storage, yielding 6<A: CA Y (some ~2 (womb ][2) (pant (peramado T~ (go Yg))))).This will ultimately result in "a woman" getting narrow scope. The other alternative is for the quantifier to remain in storage, to be pulled only at the full sentence level, resulting in the other scoping. In Figure 2 , we have added the translations to all the nodes of the parse tree.Nodes with the same translations as their parents were left unmarked. From examination of the S node translations, the original sentence is given the fully-scoped translations (every X2 (man ](2) (some Xi (woman Xi) (paSt (persuade %,9 X! (go Xl))))) and(some XI (vo~ Xl) (every X~2 (nan X2) (pant (persuade X2 Xl (go Xl) ))) )As mentioned in Section I, we were able to demonstrate the semantic capabilities of our language system by assembling a small question-answering system. Our strategy was to first translate English into logical formulas of the type discussed in [Moore, 1981] , which were then postprocessed into a form suitable for a first-order deduction system. 7 (Another possible approach would have been to translate directly into first-order logic, or to develop direct proof procedures for the non-first-order language.) Thus, we were able to integrate all the components into a question-answering system by providing a simple control structure that accepted an input, translated it into logical form, reduced the translation to first-order logic, and then either asserted the translation in the case of declarative sentences or attempted to prove it in the case of interrogatives. (Only yes/no questions have been implemented.)The main point of interest is that our question-answering system was able to handle complex semantic entailments involving tense, modality, and so on--that, moreover, it was not restricted to extensional evMuation in a data base, as with conventional questionanswering systems. For example, our system was able to handle the entailments of sentences like John could not have been persuaded to go.(The transcript of a sample dialogue is included as Appendix C.)6For convenience, when a final constituent o1' a translation is ~ it is often not written. Thus we could have written <A: (k Y (some ...) ...)> in this cue. 7We used a connection graph theorem prover written by Mark Stickel[Stlckel, forthcoming]. | null | null | null | null | Main paper:
i introduction:
When contemporary linguists and philosophers speak of "semantics," they usually mean m0del-theoretic semantics-mathematical devices for associating truth conditions with Sentences. Computational linguists, on the other hand, often use the term "semantics" to denote a phase of processing in which a data structure (e.g., a formula or network) is constructed to represent the meaning of a sentence and serve as input to later phases of processing. {A better name for this process might be "translation" or "traneduction.") Whether one takes "semantics" to be about model theory or translation, the fact remains that natural languages are marked by a wealth of complex constructions--such as tense, aspect, moods, plurals, modality, adverbials, degree terms, and sententiai complemonts--that make semantic specification a complex and challenging endeavor.Computer scientists faced with the problem of managing software complexity have developed strict design disciplines in their programming methodologies. One might speculate that a similar requirement for manageability has led linguists (since Montague, at least) to follow a discipline of strict compositiouality in semantic specification, even though model*theoretic semantics per me does not demand it. Compositionaiity requires that the meaning of a pbrase be a function of the meanings of its immediate constituents, a property that allows the grammar writer to correlate syntax and semantics on a rule-by-rule basis and keep the specification modular. Clearly, the natural analogue to compositionality in the case of translation is syntax-directed translation; it is this analogy that we seek to exploit.We describe a syntax-directed translation scheme that bears a close resemblance to model-theoretic approaches and achieves a level of perspicuity suitable for the development of large and complex grammars by using a declarative format for specifying grammar rules. In our formalism, translation types are associated with the phrasal categories of English in much the way that logical-denotation types are associated Artificial Intelligence Center SRI International 333 Raveoswood Avenue Menlo Park, CA 94025 with phrasal categories in model-theoretic semantics. The translation 'types are classes of data objects rather than abstract denotations, yet they play much the same role in the translation process that denotation types play in formal semantics.In addition to this parallel between logical types and translation types, we have intentionally designed the language in which translation rules are stated to emphasize parallels between the syntaxdirected translation and corresponding model-theoretic interpretation rules found in, say, the GPSG literature [Gazdar, forthcoming] . In the GPSG approach, each syntax rule has an associated semantic rule (typically involving functional application) that specifies how to compose the meaning of a phrase from the meanings of its constituents. In an analogous fashion, we provide for the translation of a phrase to be synthesized from the translations of its immediate constituents according to a local rule, typically involving symbol/c application and~-conversiou.It should be noted in passing that doing translation rather than model theoretic interpretation offers the temptation to abuse the formalism by having the "meaning" (translation) of a phrase depend on syntactic properties of the translations of its constituents--for instance, on the order of conjuncts in a logical expression. 
There are several points to be made in this regard. First, without severe a priori restrictions on what kinds of objects can be translations (coupled with the associated strong theoretical claims that such restrictions would embody) it seems impossible to prevent such abuses. Second, as in the case of programming languages, it is reasonable to mmume that there would emerge a set of stylistic practices that would govern the actual form of grammars for reasons of manageability and esthetics. Third, it is still an open question whether the model*theoretic program of strong compositiouality will actually succeed. Indeed, whether it succeeds or not is of little concern to the computational linguist, whose systems, in any event, have no direct way of using the sort of abstract model being proposed and whose systems must, iu general, be based on deduction (and hence translation).The rest of the paper discusses our work in more detail. Section II presents the grammar formalism and describes PATR, an implemented parsing and translation system that can accept a grammar in our formalism and uses it to process sentences. Examples of the system's operation, including its application in a simple deductive question-answering system, are found in Section HI. Finally, Section IV describes further extensions of the formalism and the parsing system. Three appendices are included: the first contains sample grammar rules; the second contains meaning postulates (axioms) used by the question-answering system; the third presents a sample dialogue session. Our grammar formalism is beet characterized as n specialized type of augmented context-free grammar° That is, we take a grammar to be a set of context-fres rules that define a language and associate structural descriptions (parse trees) for each sentence in that language in the usual way. Nodes in the parse tree are assumed to have a set of features which may assume binary values (True or False), and there is a distinguished attribute--the "translation'--whoee values range over a potentially infinite set of objects, i.e., the translations of English phrases.Viewed more abstractly, we regard translation as a binary relation between word sequences and logical formulas. The use of a relation is intended to incorporate the fact that many word sequences have several logical forms, while some have none at all. Furthermore, we view this relation as being composed (in the mathematical sense) of four simpler relations corresponding to the conceptual phases of analysis: (1) LEX (lexical analysis), (2) PARSE (parsing), (3) ANNOTATE (assignment of attribute values, syntactic filtering), and (4) TRANSLATE (translation proper, i.e., synthesis of logical form).The domains and ranges of these relations are as follows:Word Sequences -LEX-* Morpheme Sequences -PARSE-* Phrase Structure Trees -ANNOTATE-* Annotated Trees -TRANSLATE-* Logical FormThe relational composition of these four relations is the full translation relation associating word sequences with logical forms. The subphases too are viewed as relations to reflect the inherent nondeterminism of each stage of the process. For example, the sentence =a hat by every designer sent from Paris was felt" is easily seen to be nondeterministic in LEX ('felt'), PARSE (poetnominal modifier attachment), and TRANSLATE (quantifier scoping).It should be emphasized that the correspondence between processing phases and these conceptual phases is loose. 
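The composition of the four conceptual phases can be pictured as chaining nondeterministic relations. The sketch below models each phase as a function returning an iterator of outputs and composes them; the phase bodies are placeholders and the function names are assumptions introduced only for illustration.

```python
# Sketch of composing the four conceptual phases (LEX, PARSE, ANNOTATE,
# TRANSLATE) as nondeterministic relations.  Each phase maps one input to an
# iterator of outputs; the phase bodies here are placeholders, not the real
# components.

def compose(*phases):
    """Relational composition: feed every output of one phase to the next."""
    def run(x):
        results = [x]
        for phase in phases:
            results = [y for r in results for y in phase(r)]
        return results
    return run

def lex(words):        yield tuple(words)               # placeholder
def parse(morphemes):  yield ("S", morphemes)           # placeholder
def annotate(tree):    yield tree                       # placeholder
def translate(tree):   yield ("past", ("go", "john"))   # placeholder

translation_relation = compose(lex, parse, annotate, translate)
print(translation_relation(["John", "went"]))
```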
The goal of the separation is to make specification of the process perspicous and to allow simple, clean implementations. An actual system could achieve the net effect of the various stages in many ways, and numerous optimizatious could be envisioned that would have the effect of folding back later phases to increase efficiency. Tr,=,:{ couP' [~'] t~'] } lEXICON: If -* John Aano: [Proper(W) ] Truss: { John } TENSE -* &put Trash: { (X x CpastX)) } V-*go Anon: [ -~Trasnitivn(V) ]Trnn: { C~ x Can x)) } Figure 1 : Sample specification of augmented phrase structure grammar propriate to each phase and illustrate how the word sequence "John went" is analyzed by stages as standing in tbe translation relation to "(past (go john))" according to the (trivial) grammar presented in Figure 1 . The kernel relation is extended in a standard fashion to the full LEX relation. For example, "went" is mapped onto the single morpheme sequence (&past go), and "John" is mapped to (john). Thus, by extension, "John went" is transformed to (John &post go) by the lexical analysis phase.Parsing is specified in the usual manner by a context-free grammar. Utilizing the eontext,-free rules presented in the sample system specification shown in Figure 1 , (John 8cpast go) is transformed into the parse tree (S (NP john)C~ (r~rsE tput) Cvso)))Every node in the parse tree has a set of associated features. The purpo6e of ANNOTATE is to relate the bate parse tree to one that has been enhanced with attribute values, filtering out three that do not satisfy stated syntactic restrictions. These restrictions are given as Boolean expressions associated with the context-free rules; a tree is properly annotated only if all the Boolean expressions corresponding to the rules used in the analysis are simultaneously true. Again, using the rules of C The Relation TRANSLATE Logical-form synthesis rules are specified as augments to the context-free grammar. There is a language whose expressions denote translations (syntactic formulas); an expression from this language is attached to each context-free rule and serves to define the composite translation at a node in terms of the translations of its immediate constituents. In the sample sentence, TENSE' and V' {the translations of TENSE and V respectively) would denote the ),-expressions specified in their respective translation rules. VP' {the translation of the VP) is defined to be the value of (SAP (SAP COMP' TENSE') V'), where COMF' is a constant k-expression and SAP is the symbolic-application operator. This works out to be (k X [past (go X))). Finally, the symbolic application of VP' to N'P' yields (past (go John)). (For convenience we shall henceforth use square brackets for SAP and designate (SAP a ~) by a[~].)Before describing the symbolic-application operator in more detail, it is necessary to explain the exact nature of the data objects serving as translations. At one level, it is convenient to think of the translations as X-expressions, since X-expressions are a convenient notation for specifying how fragments of a translation are substituted into their appropriate operator-operand positions in the formula being assembled-especially when the composition rules follow the syntactic structure as encoded in the parse tree. There are several phenomena, however, that require the storage of more information at a node than can be represented in a bare k-expression. 
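The following sketch shows one way symbolic application and λ-conversion of this kind can be realized over S-expressions, reproducing the VP' and sentence translations for "John went". It is a reconstruction under an assumed encoding (lambda terms as lists of the form ["lambda", var, body]), not PATR's actual implementation.

```python
# Sketch of symbolic application (SAP) with beta-reduction over S-expressions.

def substitute(expr, var, value):
    if expr == var:
        return value
    if isinstance(expr, list):
        return [substitute(e, var, value) for e in expr]
    return expr

def is_lambda(e):
    return isinstance(e, list) and len(e) == 3 and e[0] == "lambda"

def sap(fn, arg):
    """Symbolic application with on-the-fly beta-reduction."""
    if is_lambda(fn):
        return normalize(substitute(fn[2], fn[1], arg))
    return [fn, arg]

def normalize(expr):
    """Contract any remaining (lambda ...) applications inside expr."""
    if isinstance(expr, list):
        expr = [normalize(e) for e in expr]
        if len(expr) == 2 and is_lambda(expr[0]):
            return sap(expr[0], expr[1])
    return expr

comp  = ["lambda", "T", ["lambda", "P", ["lambda", "X", ["T", ["P", "X"]]]]]
tense = ["lambda", "S", ["past", "S"]]   # translation of &past
verb  = ["lambda", "Y", ["go", "Y"]]     # translation of "go"

vp = sap(sap(comp, tense), verb)         # (lambda X (past (go X)))
print(sap(vp, "john"))                   # ['past', ['go', 'john']]
```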
Two of the most conspicuous phenonema of this type are quantifier scoping and unbounded dependencies ("gaps").Our approach to quantifier scoping has been to take a version of Cooper's storage technique, originally proposed in the context of model-tbeoretic semantics, [Cooper, forthcoming[ and adapt it to the needs of translation. For the time being, let us take translations to be ordered pairs whose first component (the head) is an expression in the target language, characteristically a k-expression. The second component of the pair is an object called storage, a structured collection of sentential operators that can be applied to a sentence matrix in such a way as to introduce a quantifier and "capture" a free variable occurring in that sentence matrix. 2 For example, the translation of "a happy man" might be < m , (X S (some m (and (man m)(happy m)) S)) >.s Here the head is m (simply a free variable), and storage consists of the X-expression (k S 2in the sample grammar presented in Appendix A, the storage.formlng operation is notated mk.mbd. 3Followlng [Moore, lO80~, a quantified expression is of the form (quauti6er, variable, restriction, body) ...). If the verb phrase "sleeps ~ were to receive the translation < (X X (sleep X)), ~ > (i.e., a unary predicate as head and no storage), then the symbolic application of the verb phrase translation to the noun phrase translation would compose the heads in the usual way and take the "uniou" of the storage yielding < (sleep m), (k S (some m (and (man m)(happy m)) S)) >.We define an operation called ~pull.s," which has the effect of "pulling" the sentence operator out of storage and applying it to the head. There is another pull operation, pull.v, which operates on heads representing unary predicates rather than sentence matrices. When pull.s is applied in our example, it yields < (some m (and (man m)(happy m)) (sleep m)), ~b >, corresponding to the translation of the clause ~a happy man sleeps." Note that in the process the free variable m has been "captured." In model-theoretic semantics this capture would ordinarily be meaningless, although one can complicate the mathematical machinery to achieve the same effect. Since translation is fundamentally a syntactic process, however, this operation is welldefined and quite natural.To handle gaps, we enriched the translations with a third component: a variable corresponding to the gapped position. For example, the translation of the relative clause ".,.[that] the man saw" would be a triple: < (past (see X Y)), Y, (k S (the X (man X) $))>, where the second component, Y, tracks the free variable corresponding to the gap. At the node at which the gap was to be discharged, X-abstraction would occur (as specified in the grammar by the operation "uugap') producing the unary predicate (X Y (past (see X Y))), which would ultimately be applied to the variable corresponding to the head of the noun phrase.It turns out that triples consisting of (head, var, storage) are adequate to serve as translations of a large class of phrases, but that the application operator needs to distinguish two subcases (which we call type A and type B objects). Until now we have been discussing type A objects, whose application rule is given (roughly) as < hal,vat,san>l< hal',vat',san'>[ -~ <(hd hd'),var LI var', sto i3 sto'> where one of vat or vat' must be null. 
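A small sketch of the pull operation on such pairs: the stored operator is wrapped around the sentence matrix in the head, "capturing" the free variable m. The pair encoding and helper names are assumptions made for illustration.

```python
# Sketch of pull.s: apply a stored sentence operator to the sentence matrix
# in the head, capturing the free variable it contains.

def mk_store(quantifier, var, restriction):
    """Store an operator of the form (lambda S (q var restriction S))."""
    return (quantifier, var, restriction)

def pull_s(head, storage):
    """Pull one stored operator and wrap it around the sentence matrix."""
    quantifier, var, restriction = storage[0]
    wrapped = [quantifier, var, restriction, head]
    return wrapped, storage[1:]

# "a happy man sleeps":  head (sleep m) with the NP's operator in storage.
head = ["sleep", "m"]
storage = [mk_store("some", "m", ["and", ["man", "m"], ["happy", "m"]])]

sentence, remaining = pull_s(head, storage)
print(sentence)   # ['some', 'm', ['and', ['man', 'm'], ['happy', 'm']], ['sleep', 'm']]
print(remaining)  # []  -- storage is now empty
```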
In the ease of type B objects, which are assigned primarily as translations of determiners, the rule is var ,san > [< hd',var',sto' >] = <var, var', hd(hd') U sto U sto'> For example, if the meaning of "every" is every' ~-<(k P (X S (every X (P X) S))), X, ~b> and the meaning of ~man" is man' ----< man, ~, ~ > then the meaning of "every man" is every'[man'] = ( X , ¢, (X S (man X) S)> , as expected.< h d,Nondeterminism enters in two ways. First, since pull opera, tions can be invoked nondeterministically at various nodes in the parse tree (as specified by the grammar), there exists the possibility of computing multiple scopings for a single context-free parse tree. (See Section III.B for an example of this phenomenon.) In addition, the grammar writer can specify explicit nondeterminism by associating several distinct translation rules with a single context-free production. In this case, he can control the application of a translation schema by specifying for each schema a guard, a Boolean combination of features that the nodes analyzed by the production must satisfy in order for the translation schema to be applicable.The techniques presented in Sections H.B and II.C were implemented in a parsing and translation system called PATR which was used as a component in a dialogue system discussed in Section III.B. The input to the system is a sentence, which is preprocessed by a lexical analyzer. Parsing is performed by a simple recursive descent parser, augmented to add annotations to the nodes of the parse tree. Translation is then done in a separate pass over the annotated parse tree. Thus the four conceptual phases are implemented as three actual processing phases. This folding of two phases into one was done purely for reasons of efficiency and has no effect on the actual results obtained by the system. Functions to perform the storage manipulation, gap handling, and the other features of translation presented earlier have all been realized in the translation component of the running system. The next section describes an actual grammar that has been used in conjunction with this translation system.To illustrate the ease with which diverse semantic features could be handled, a grammar was written that defines a semantically interesting fragment of English along with its translation into logical form [Moore, 1981] . The grammar for the fragment illustrated in this dialogue is compact occupying only a few pages, yet it gives both syntax and semantics for modais, tense, aspect, passives, and lexically controlled infinitival complements. (A portion of the grammar is included as Appendix A.) 4 The full test grammar, Io,~ely based on DIAGRAM [Robinson, 1982] but restricted and modified to reflect changes in a~ proach, was the grammar used to specify the translations of the sentences in the sample dialogue of Appendix C.The grammar presented in Appendix A encodes a relation between sentences and expressions in logical form. We now present a sample of this relation, as well as its derivation, with a sample sentence: "Every man persuaded a woman to go." Lexical analysis relates the sample sentence to two morpheme streams: every man &ppi persuade a woman to go 4Since this is just a small portion of the actual grammar selected for expository purposes, many of the phrasal categories and annotations will seem unmotivated and needlessly complex. 
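The two application cases can be sketched as follows; the triple encoding, the use of a Python function for the determiner head, and the simplified apply_head are assumptions made so that the every'[man'] example above can be reproduced.

```python
# Sketch of the type A / type B application rules over translation triples
# (head, var, storage).  This is a reconstruction for illustration only,
# with a simplified apply in place of full beta-reduction.

def apply_head(fn, arg):
    # For this example the determiner head is a curried Python function.
    return fn(arg) if callable(fn) else [fn, arg]

def apply_A(a, b):
    hd, var, sto = a; hd2, var2, sto2 = b
    assert var is None or var2 is None
    return (apply_head(hd, hd2), var or var2, sto + sto2)

def apply_B(b, a):
    hd, var, sto = b; hd2, var2, sto2 = a
    return (var, var2, [apply_head(hd, hd2)] + sto + sto2)

# every' is a type B object; its head builds the stored operator.
every = (lambda p: ["lambda", "S", ["every", "X", [p, "X"], "S"]], "X", [])
man   = ("man", None, [])

print(apply_B(every, man))
# ('X', None, [['lambda', 'S', ['every', 'X', ['man', 'X'], 'S']]])
```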
These categories and annotations m'e utilized elsewhere in the test grammar.*, every man ,~past persuade a woman to go.The first is immediately eliminated because there is no context-free parse for it in the grammar. The second, however, is parsed as [S (SDEC (NP (DETP (DDET (VET every))) C~u CN0m~V (SOUN Cs re,a))))) (Pn~ICar~ (*u~ (TE~E kpaat)) (VPP (V? CV?T (Vpersuado))) (~ (DET? CA a)) (~u (Nnm~ (~vtm CN womm) )))) (INFINITIVE (TO to) CV~ Cv? CWT CV go]While parsing is being done, annotations are added to each node of the parse tree. For instance, the NP -* DETP NOM rule includes the annotation rule AGREE( NP, DETP, Definite ). AGREE is one of a set of macros defined for the convenience of the grammar writer. This particular macro invocation is equivalent to the Boolean expression Definite(NP) ~ Definite(DETP). Since the DETP node itself has the annotation Definite as a result of the preceding annotation process, the NP node now gets the annotation Definite as wello At the bottom level, the Definite annotation was derived from the lexical entry for the word "evesy'. s The whole parse tree receives the following annotation:[S Cb'~O (lqP: Delinite (DETP: DeBnite CDDET: DeBnite (DET: DeBuite eve1"y) ) ) CNOU (stump CNO~ CSm~))))) CPR~ICATE CAU~ CTENSE ~put)) (VPP CVP: Active (VPT: Active, Ttansitlve, Takesln?(V: Active, Transitive, Takesfn[ porsuade) ) )0~' (DET? CA a) ) CNOU C~la'~ C~ml C~ ,,on~))))) CDr~ISZTZ'W (TO to) (vPP (w: Active (VPT: Active Cv: Active sol Finally, the entire annotated parse tree is traversed to assign translations to the nodes through a direct implementation of the process described in Section II.C. (Type A and B objects in the following examples are marked with a prefix 'A:' or 'B:'.) For instance, the VP node covering (persuade a woman to go), has the translation rule VPT'[N'P'][INFINITIVE']. When this is applied to the translations of the node's constituents, we have CA: CA X CA P (~ T (persuade ¥ X (P X)))~[,CA: X2. ~,. C~ S (some X2 Cwomu X2) S))~][cA:(~x C~x))~]which, after the appropriate applications are performed, yields CA: CAP (~Y (persuade YX2 CPX2)))). ~, (A S (some X2 (~-X2) S))~ 5Note that, although the annotation phase was described and is implemented procedurally, the process actually used guarantees that the resulting annotation is ex" "t|y the one specified declaratlve~y by the annotation rules.[o,: (A x (gox))>] = CA: ()/¥ (persuadeTX2 (goX2))). ~b, CA S (some X2 (roman X2) S))~After the past operator has been applied, we have <A: CA T (pant (persumde YX2 (goX2)))). ~b, CA S (some X2 (~znu X2) S)))At this point, the pull operator (pull.v) can be used to bring the quantifier out of storage, yielding 6<A: CA Y (some ~2 (womb ][2) (pant (peramado T~ (go Yg))))).This will ultimately result in "a woman" getting narrow scope. The other alternative is for the quantifier to remain in storage, to be pulled only at the full sentence level, resulting in the other scoping. In Figure 2 , we have added the translations to all the nodes of the parse tree.Nodes with the same translations as their parents were left unmarked. From examination of the S node translations, the original sentence is given the fully-scoped translations (every X2 (man ](2) (some Xi (woman Xi) (paSt (persuade %,9 X! (go Xl))))) and(some XI (vo~ Xl) (every X~2 (nan X2) (pant (persuade X2 Xl (go Xl) ))) )As mentioned in Section I, we were able to demonstrate the semantic capabilities of our language system by assembling a small question-answering system. 
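Returning to the annotation step in this derivation, the AGREE macro can be read as a simple Boolean constraint over feature assignments, as in the sketch below; the feature dictionaries and values are invented for illustration.

```python
# Sketch of the AGREE annotation macro as a constraint over feature sets.
# AGREE(NP, DETP, Definite) expands to "Definite(NP) <-> Definite(DETP)".

def AGREE(parent, child, feature):
    return parent.get(feature, False) == child.get(feature, False)

detp = {"Definite": True}     # inherited from the lexical entry for "every"
np   = {"Definite": True}     # the annotation process copies the feature up

print(AGREE(np, detp, "Definite"))                      # True: annotation admissible
print(AGREE({"Definite": False}, detp, "Definite"))     # False: tree filtered out
```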
Our strategy was to first translate English into logical formulas of the type discussed in [Moore, 1981], which were then postprocessed into a form suitable for a first-order deduction system. [7] (Another possible approach would have been to translate directly into first-order logic, or to develop direct proof procedures for the non-first-order language.) Thus, we were able to integrate all the components into a question-answering system by providing a simple control structure that accepted an input, translated it into logical form, reduced the translation to first-order logic, and then either asserted the translation in the case of declarative sentences or attempted to prove it in the case of interrogatives. (Only yes/no questions have been implemented.) The main point of interest is that our question-answering system was able to handle complex semantic entailments involving tense, modality, and so on--and that, moreover, it was not restricted to extensional evaluation in a data base, as with conventional question-answering systems. For example, our system was able to handle the entailments of sentences like "John could not have been persuaded to go." (The transcript of a sample dialogue is included as Appendix C.) Footnote 6: For convenience, when a final constituent of a translation is φ, it is often not written; thus we could have written <A: (λ Y (some ...) ...)> in this case. Footnote 7: We used a connection graph theorem prover written by Mark Stickel [Stickel, forthcoming].
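To make the shape of this control structure concrete, here is a minimal sketch. The component functions (translate, reduce_to_fol, provable) are placeholders standing in for the real translator, the postprocessing step, and the connection-graph prover; only the assert-or-prove dispatch reflects the description above.

```python
# Sketch of the question-answering control loop: translate the input to
# logical form, reduce it to first-order logic, then assert it (declaratives)
# or try to prove it (yes/no questions).

knowledge_base = []

def translate(sentence):          # placeholder for the translation relation
    return ("lf", sentence)

def reduce_to_fol(logical_form):  # placeholder for the postprocessing step
    return ("fol",) + logical_form[1:]

def provable(formula):            # placeholder for the theorem prover
    return formula in knowledge_base

def handle(sentence, is_question):
    formula = reduce_to_fol(translate(sentence))
    if is_question:
        return "Yes." if provable(formula) else "Unknown."
    knowledge_base.append(formula)
    return "OK."

print(handle("John went", is_question=False))   # OK.
print(handle("John went", is_question=True))    # Yes.
```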
Appendix:
| null | null | null | null | {
"paperhash": [
"robinson|diagram:_a_grammar_for_dialogues",
"stickel|a_nonclausal_connection-graph_resolution_theorem-proving_program",
"moore|problems_in_logical_form",
"konolige|capturing_linguistic_generalizations_with_metarules_in_an_annotated_phrase-structure_grammar",
"bear|psg:_a_simple_phrase_structure_parser"
],
"title": [
"DIAGRAM: a grammar for dialogues",
"A Nonclausal Connection-Graph Resolution Theorem-Proving Program",
"Problems in Logical Form",
"Capturing Linguistic Generalizations With Metarules in an Annotated Phrase-Structure Grammar",
"PSG: A Simple Phrase Structure Parser"
],
"abstract": [
"An explanatory overview is given of DIAGRAM, a large and complex grammar used in an artificial intelligence system for interpreting English dialogue. DIAGRAM is an augmented phrase-structure grammar with rule procedures that allow phrases to inherit attributes from their constituents and to acquire attributes from the larger phrases in which they themselves are constituents. These attributes are used to set context-sensitive constraints on the acceptance of an analysis. Constraints can be imposed by conditions on dominance as well as by conditions on constituency. Rule procedures can also assign scores to an analysis to rate it as probable or unlikely. Less likely analyses can be ignored by the procedures that interpret the utterance. For every expression it analyzes, DIAGRAM provides an annotated description of the structure. The annotations supply important information for other parts of the system that interpret the expression in the context of a dialogue.\nMajor design decisions are explained and illustrated. Some contrasts with transformational grammars are pointed out and problems that motivate a plan to use metarules in the future are discussed. (Metarules derive new rules from a set of base rules to achieve the kind of generality previously captured by transformational grammars but without having to perform transformations on syntactic analyses.)",
"A new theorem-proving program, combining the use of non-clausal resolution and connection graphs, is described. The use of nonclausal resolution as the inference system eliminates some of the redundancy and unreadability of clause-based systems. The use of a connection graph restricts the search space and facilitates graph searching for efficient deduction.",
"Abstract : Most current theories of natural-language processing propose that the assimilation of an utterance involves producing an expression or structure that in some sense represents the literal meaning of the utterance. It is often maintained that understanding what an utterance literally means consists in being able to recover such a representation. In philosophy and linguistics this sort of representation is usually said to display the \"logical form\" of an utterance. This paper surveys some of the key problems that arise in defining a system of representation for the logical forms of English sentences and suggests possible approaches to their solution. The author first looks at some general issues relating to the notion of logical form, explaining why it makes sense to define such a notion only for sentences in context, not in isolation, and then discusses the relationship between research on logical form and work on knowledge representation in artificial intelligence. The rest of the paper is devoted to examining specific problems in logical form. These include the following: quantifiers; events, actions and processes; time and space; collective entities and substances; propositional attitudes and modalities; and questions and imperatives.",
"1. I n t r o d u c t i o n Compu ta t i ona l models employed by cu r ren t na tu ra l language unders tand ing systems re ly on p h r a s e s t r u c t u r e rep resen ta t i ons o f syn tax . Whether imp lemen ted as augmented t rans i t i on nets, BNF grammars, anno ta ted phrase-structure grammars, or s imi la r methods, a phrase-structure representation makes the pars ing p rob lem c o m p u t a t l o n a l l y t r a c t a b l e [ 7 ] . H o w e v e r , p h r a s e s t r u c t u r e rep resen ta t i ons have been open to the c r i t i c i s m tha t they do not cap tu re l i ngu i s t i c gene ra l i za t i ons t h a t are easi ly expressed in t r a n s f o r m a t i o n a l g rammars .",
"Programme d'ordinateur utilisant une grammaire syntagmatique pour l'analyse des phrases de l'anglais."
],
"authors": [
{
"name": [
"Jane J. Robinson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Stickel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Robert C. Moore"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"K. Konolige"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Bear",
"L. Karttunen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"17788520",
"12328529",
"18655604",
"14179377",
"64207959"
],
"intents": [
[],
[],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false
]
} | Problem: The paper discusses a scheme for syntax-directed translation that mirrors compositional model-theoretic semantics.
Solution: The hypothesis is that utilizing a syntax-directed translation scheme resembling model-theoretic approaches can lead to a clear and perspicuous method for developing large and complex grammars, particularly in the context of English translation systems like PATR. | 512 | 0.064453 | null | null | null | null | null | null | null | null |
c865d5cfffd2ab454727297d9fd4e7558c5d05f5 | 18047271 | null | Planning Natural Language Referring Expressions | This paper describes how a language-planning system can produce natural-language referring expressions that satisfy multiple goals. It describes a formal representation for reasoning about several agents' mutual knowledge using possible-worlds semantics and the general organization of a system that uses the formalism to reason about plans combining physical and linguistic actions at different levels of abstraction. It discusses the planning of concept activation actions that are realized by definite referring expressions in the planned utterances, and shows how it is possible to integrate physical actions for communicating intentions with linguistic actions, resulting in plans that include pointing as one of the communicative actions available to the speaker. | {
"name": [
"Appelt, Douglas E."
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 26 | 14 | null | One of the mo~t important constituent processes of natural-language generation is the production of referring expressions, which occur in almost every utterance. Referring expressions often carry the burden of informing the hearer of propositions as well as referring to objects. Therefore, many phenomena that are observed in dialogues canet.¥_w~eet../-J "-°'~ ""' "-~The author gratefully acknowledges the support for this research provided in part by the Office of Naval Research under contract N0014-80-C-0296 and in part by the National Science Foundation under grant MCS-8115105. not be explained by the simple view that referring expressions are descriptions of the intended referent sufficient to distinguish the referent from other objects in the domain or in focus.Consider the situation (depicted in Figure 1 ) in which two agents, an apprentice and an expert, are cooperating on a common task, such as disassembling an air compressor. Several tools are lying on the workbench, and although the apprentice knows that the objects are there, he may not necessarily know where they are. The expert might say:Use the wheelpuller to remove the flywheel.( 1)while pointing at the wheelpuller. The apprentice may think to himself at this point, "Ah, ha, so that's a wheelpuller," and then proceed to remove the flywheel.What the expert is accomplishing through the utterance of (1) by using the noun phrase "the wheelpuller" cannot be fully explained by treating definite referring expressions simply as descriptions that are uniquely true of some object, even taking focusing [71 [11] into account. The expert uses "the wheelpuller" to refer to an object that in fact uniquely fits the description predicated of it, so this simple analysis is incapable of accounting for the effects the expert intends his utterance to have.If one takes the knowledge and intentions of the speaker and hearer into account, a more accurate account of the speaker's use of the referring expression can be developed. The apprentice does not know what the object is that fits the description "the wheelpuller". The expert knows that the apprentice doesn't know this, and performs the pointing action to guarantee that his intentions will be recognized correctly.The apprentice must recognize what the expert is trying to communicate by pointing --he must realize that pointing is not just a random gesture, but is intended by the speaker to be recognized as a communicative act by the hearer in much the same way as his utterances are recognized as communicative acts. Furthermore, the apprentice must recognize how the pointing act is cw:,'elated with the utterance the expert is producing. Although there is no sped~: deictic reference in the expert's utterance, it is clear that he does not mean the flywheel, since we will assume that the apprentice can determine that the object he is pointing to is a tool. 
The apprentice realizes that the object the expert is pointing to is the intended referent of "the wheelpuUer," but in the process, he also acquires the information that the expert believes the object he is pointing to is a wheelpuller, and that the exPert has also informed him of that fact.A language-planning system called KAMP (for Knowledge And Modalities Planner} has been developed that can plan utterances similar to example {1) above, coordinate the linguistic actions with physical actions, and know that the utterance it plans will have the intended multiple effects on the hearer. KAMP builds on Cohen and Perrault's idea of planning speech acts [4] , but extends the planning activity down to the level of constructing surface English sentences. A detailed description of the entire KAMP system can be found in [2] . The system has been implemented and tested on examples in a cooperative equipment assembly domain, such as the one in example {1). This paper develops and extends some of the ideas of an early prototype system described in [1] .The reference problems that KAMP addresses are a subset of a more general problem, which, following Cohen [5] will be called 'identification.' Whenever a speaker makes a definite reference, he intends the hearer to identify some object in the world as the referent. Identifying a refer-en~ requires that the agent perform some cognitive activity, such as the simple case of matching the description with what he knows, or in some cases plan to perform perceptual actions that lead to the identification. KAMP simplifies the problem by not considering perceptual actions, and assumes that there is some 'perceptual field' common to the participants in a dialogue, and that the objects that lie within that field are mutually known to the participants, along with the observable properties and relations that hold among them.For example, the speaker and hearer in (1) are assumed to mutually know the size, shape and location of all objects on the workbench. The agents may not know unobservable properties of the objects, such as the fact that a particular tool is a wheelpuller. Similarly, the participants are assumed to be mutually aware of physical actions that take place within their perceptual field, without explicitly performing any perceptual actions. When the expert points at the wheelpuller, the apprentice is simply assumed to know that he is doing it. H. KNOWLEDGE REPRESENTATION KAMP uses an intensional logic to describe facts about the world, including the knowledge of agents. The possibleworlds semantics of this intensional logic is axiomatized in first-order logic as described by Moore [8] . The axiomatization enables KAMP to reason about how the knowledge of both the speaker and the hearer changes as they perform actions.* What it means to identify an object is somewhat problematical.KAMP assumes that identification means that the referring description conjoined with focusing knowledge picks out the same individual in all possible worlds consistent with what the agent knows.Moore's central idea is to axiomatize operators such as Know as relations between possible worlds. For example, if Wo denotes the real world, then Know(John, P) means P is true in every possible world that is consistent with what John knows. This is stated formally in the axiom schema:Vw2 K(A, w,, w2) D T(w2,P).( 1)The predicate T(w,P) means that P is true in possible world w. 
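Read operationally, axiom schema (1) is a check over accessible worlds: A knows P in w exactly when P holds in every world K-accessible to A from w. The sketch below evaluates such a check against a small toy model; the worlds, the K tuples, and the valuation are invented purely for illustration.

```python
# Sketch of the possible-worlds reading of Know: an agent knows P in world w0
# iff P is true in every world K-accessible to that agent from w0.

worlds = {"w0", "w1", "w2"}
K = {("John", "w0", "w0"), ("John", "w0", "w1")}   # K(A, w1, w2) tuples
true_in = {("w0", "P"), ("w1", "P"), ("w2", "Q")}  # T(w, P) facts

def T(w, p):
    return (w, p) in true_in

def knows(agent, w, p):
    accessible = [w2 for (a, w1, w2) in K if a == agent and w1 == w]
    return all(T(w2, p) for w2 in accessible)

print(knows("John", "w0", "P"))   # True:  P holds in w0 and w1
print(knows("John", "w0", "Q"))   # False: Q fails in an accessible world
```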
The predicate K(A,w,,w2) means that w2 is consistent with what A knows in w,.Actions are described by treating possible worlds as state variables, and axiomatizing actions as relations between possible worlds. Thus, R(E, wl, w2) means that world w2 is the result of event E happening in world w2.It is important that a language planning system reason about mutual knowledge while planning referring expressions [31151. Failure to consider the mutual knowledge of the speaker and hearer can lead to the failure of the reference. K.AMP uses an axiomatization of mutual knowledge in terms of relations on possible worlds. An agent's knowledge is described as everything that is true in all possible worlds compatible with his knowledge. The mutual knowledge of two agents A and B is everything that is (2)In (2), T(w, P) means that the object language proposition P is true in possible world w, and K(a, w,, w~) is a predicate that describes the relation between possible worlds that means that w2 is a possible alternative to w, according to a's knowledge. The second axiom needed is:Vz, w,, w2 K(z, w,, w2) D VyK(Kernel(z, y), wl, w~) (3)Axiom (3) states that the possible worlds consistent with any agent's knowledge is a subset of the possible worlds consistent with the kernel of that agent and any other agent.KAMP is a multiple-agent planning system designed around a NOAH-like hierarchical planner [10] . KAMP uses two descriptions of each action available to the planning agent: a complete axiomatization of the action using the possible-worlds approach outlined above, and an action * Notice that the "intersection" of the propositions believed by two agents is represented by the union of possible worlds compatible with their knowledge. summary consisting of a simplified description of the action that serves as a heuristic to aid in proposing plans that are likely to succeed. KAMP forms a plan using the simplified action summaries first, and then verifies the plan using the full axiomatization. Since the possible-worlds axioms lend themselves more efficiently to proving a plan correct than in generating a plan in the first place, such an approach results in a system that is considerably more efficient than one relying on the possible-worlds axioms alone.Because action summaries represent actions in a simplified form, the planner can ignore details of the effects of communicative acts to produce a plan that is likely to work in most circumstances. For example, if a simplified description of the effects of informing states that the hearer knows the proposition, then the planner can reason that a plan to achieve the goal of the hearer knowing P is likely to include the action of informing him that P is true. In the relatively unlikely event that this description is inadequate, this fact will be detected during the verification phase where the more complete description is invoked.The flow of control during KAMP's heuristic plan-generation phase is similar to that of NOAH's. If a goal needs to be satisfied, KAMP searches for actions that can achieve the goal and inserts them into the plan, along with the preconditions, which become new goals to be satisfied. 
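The two-pass strategy just described (propose a plan with the simplified action summaries, then verify it against the full possible-worlds axiomatization) can be sketched as follows; the summary table, the goal, and the stand-in verify function are placeholders for illustration, not KAMP's actual machinery.

```python
# Sketch of plan generation with action summaries followed by verification.

action_summaries = {"inform(H, P)": ["knows(H, P)"]}   # simplified effects

def propose(goal):
    """Pick any action whose summarized effects include the goal."""
    return [a for a, effects in action_summaries.items() if goal in effects]

def verify(plan):
    """Stand-in for proving the plan correct from the full axiomatization."""
    return bool(plan)

def plan_for(goal):
    for candidate in propose(goal):
        if verify([candidate]):
            return [candidate]
    return None

print(plan_for("knows(H, P)"))   # ['inform(H, P)']
```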
When the entire plan has been expanded to one level of abstraction, then if there is a lower level, all high-level actions that have low-level expansions are expanded.Between each stage of expansion, critics are invoked that examine the plan for global interactions between actions, and make changes in the structure of the plan to avoid the bad effects of the interactions and take advantage of the beneficial ones. Critics play an important role in the planning of referring expressions, and their functions are described more fully in Section IV. Figure 2 . The hierarchy consists of illocntionary acts, surface speech-acts, concept-activation actions, and utterance acts• Illocutionary acts are speech acts such as informing and requesting, which are planned at the highest level without regard for any specific linguistic realization. The next level consists of surface speech-acts, which are abstractions of the actions of uttering particular sentences with particular syntactic structures. At this level the planner starts making commitments to particular choices in syntactic structure, and linguistic knowledge enters the planning process. One surface speech-act can realize one or more illocutionary acts. The next level consists of conceptactivation actions, which entail the planning of descriptions that are mutually believed by the speaker and hearer to refer to objects in the world. This is the level of abstraction at which noun phrases for definite reference are planned. Finally, at the lowest level of abstraction are utterance acts, consisting of the utterance of specific words.Concept-activation actions describe referring at a high enough level of abstraction so that they are not constrained to have purely linguistic realizations. When a conceptactivation action is expanded to a lower level of abstraction, it can result in the planning of a noun phrase within the surface speech-act of which the concept activation is a part, and physical actions such as pointing that also communicate the speaker's intention to refer.KAMP can plan referential definite noun phrases that realize concept-activation actions. (The planning of attributive and indefinite referring expressions has not yet been addressed.) KAMP recognizes the need to plan a concept activation when it is expanding a surface speechact. The surface speech-act is planned with a particular proposition that the hearer has to come to believe the speaker wants him to know or want. It is necessary to include whatever information the hearer needs to recognize what the proposition is, and this leads to the necessity of referring to the particular objects mentioned in the proposition. The planner often reasons that some objects do not need to be referred to at all. For example, in requesting a hearer to remove the pump from the platform in an air-compressor assembly task, if the hearer knows that the pump is attached to the platform and nothing else, it is not necessary to mention the platform, since it is sufficient to say "Remove the pump," for the hearer to recognize the following propomtlon: Want(S, Do(H, Remove(pumpl, platforml))).The planning of a concept-activation action is similar to the planning of an illocutionary act in that the speaker is trying to get the hearer to recognize his intention to perform the act. 
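The expansion-then-criticism loop can be sketched as below: every action with a lower-level expansion is expanded, then the critics inspect and may edit the whole plan before the next level is processed. The expansion table, action names, and the do-nothing critic are invented placeholders; a real critic performs plan surgery such as action subsumption.

```python
# Sketch of hierarchical plan expansion with critics invoked between levels.

expansions = {
    "request(remove-pump)": ["surface-request", "concept-activation(pump1)"],
    "concept-activation(pump1)": ["describe(pump1)"],
}

def subsumption_critic(plan):
    # A real critic would, e.g., merge an inform action into a describe here.
    return plan

def expand(plan, critics=(subsumption_critic,)):
    while any(a in expansions for a in plan):
        plan = [b for a in plan for b in expansions.get(a, [a])]
        for critic in critics:
            plan = critic(plan)
    return plan

print(expand(["request(remove-pump)"]))
# ['surface-request', 'describe(pump1)']
```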
This means that all that is necessary from a high-level planning point of view is that the speaker perform some action that signals to the hearer that the * For a description of KAMP's formalization of wanting, see Appelt, 12]• speaker wants to refer to the object. This is often done by incorporating a mutually believed description of the object into the utterance, but there is no requirement that the means by which the speaker communicates this intention be linguistic. For example, the speaker could point at an object (almost always a communicative act), or perhaps throw it at the hearer (not so clearly communicative but definitely attention-getting. The hearer has to reason whether there are any communicative intentions behind the act.)Since concept-activation actions are planned during the expansion of surface speech-acts, the actions that realize them must somehow become part of the utterance being planned. Therefore, all concept-activation actions are ex- The following two axiom schemata describe concept activation in KAMP's possible worlds representation: Axiom schema (4) says that when an agent A performs a concept activation for an agent B, he must first want the object C to be active, and as a result of performing it, C becomes active with respect to A and B; Axiom schema (5) says that after agent A performs the action, the two agents A and B mutually know that the action has been performed. The consequence for the planner of axiomatizing concept activation as in (4) and (5) is that the problem of activating a concept now becomes one of getting the hearer to know that the speaker wants a particular concept to be active. This is the role of the intention-communication component in the expansion of the concept activation.EQUATIONKAMP knows about two types of actions that produce knowledge about what concepts a speaker wants to be active. One is an action called describe, which is ultimately expanded into a linguistic description corresponding to the concept the speaker intends to activate, and the other is called point, which is a generalized pointing action. The point action is assumed to directly communicate the intention to activate a concept, thereby avoiding the problem of observing a gesture and deciding whether it is a pointing, or an attempt to scratch an itch.The following schema defines the describe action: VWlW2 R(Do(A, Describe(B, P}), w,, w2) D 3.A (vy D'(y) 3 • = y)) -T(wl, Want(A, Active(A, B, z)))Axiom (6) says that the precondition for an agent to perform an action of describing using a particular description P is that the speaker wants an objee~ to be active if and only if it uniquely fits the description predicated of it. In (6), the symbol P denotes a description consisting of object language predicates that can be applied to the object being described. It could be defined asP ~-Xx.(D,(z) A... A D.(x))where the Di(z) are the individual descriptors that comprise the description. The symbol D* denotes a similar expression, which includes all the descriptors of P conjoined with a set of predicates that describe the focus of thediscourse. An axiom similar to (5) is also needed to assert that the speaker and hearer will mutually know, after the action is performed, that it has taken place. 
Therefore, if the speaker and hearer mutually know of an object that satisfies P in focus, then they mutually know that the speaker wants it to be active.The pointing action is much simpler because it does not require either the speaker or the hearer to know anything at all about the object.Vwl, w2 R(Do(A, Point(B,X)), w,, w~) D T(w,, Want(A, Active(A, B, X))).According to the above axiom, if an agent points at an object, that implies that he wants the object to be active. As usual, an axiom similar to (5) is required to assert that the agents mutually know the action has been performed. Axioms (4) and (5) work together with (6) and (7) to produce the desired effects. When a speaker utters a description, or points, he communicates his intention to refer. When he performs the concept-activation action by incorporating the surface-linguistic component of his action into a surface speech-act, his intentions are carried out. Because the equivalence of axiom (6) can be used in both directions, if the speaker wants an object to be active, then one can reason that he knows the description predicated of it is true.A major problem facing the planner is deciding when the necessary conditions obtain to be able to take advantage of the interactions between (6) and (7). Since this task involves examining several actions in the plan, it is performed by a critic called the action-subsumption critic.This critic notices when the speaker is informing the hearer * A complete discussion of focusing in KAMP is beyond the scope of this paper. KAMP uses an axiomatization of Sidner's focusing rules Ill]to keep track of focus shifts. of a predication that could be included in the description associated with a concept activation. When such an interaction is noticed, the critic proposes a modification to the plan. If the surface-linguistic component does not insist that the modification is impossible given the grammar, then the action subsumption is carried out.In example (1), for instance, the expert has a high-level plan that includes the performance of two illocutionary acts: requesting that the apprentice remove the pump using a particular tool (call it tool1), and informing the apprentice that tool1 is a wheelpuller. The action subsumption critic notices that in the request the expert is referring to tool1 and also wants to inform the hearer of a property of tool1. Therefore, it proposes combining the property of being a wheelpuller into the description used for referring to tool1 while making the request.This paper has described a formalism for describing the action of referring in a manner that is useful for a generation system based on planning, like KAMP. The central idea is to divide referring into two tasks: an intentioncommunication task and a surface-linguistic task. By so doing, it is possible to axiomatize different actions that communicate a speaker's intention to refer. Thus, the planner is able to produce plans that produce naturallanguage referring expressions, but take the larger context of the speaker's nonlinguistic actions into account as well.KAMP currently plans only simple definite reference. One promising extension of this approach for future research is to extend the active predicate to apply to intensional concepts in addition to the extensional ones now required for definite reference. We hope this will allow for the planning of attributive and indefinite reference as well. 
KAMP currently does not plan quantified noun phrases, nor can it refer generically, nor can it refer to collections of entities. Much basic research needs to be done to extend KAMP to handle these other cases, but we hope that the formalism outlined here will provide a good base from which to investigate these extensions. | null | null | null | null | Main paper:
i. introduction:
One of the mo~t important constituent processes of natural-language generation is the production of referring expressions, which occur in almost every utterance. Referring expressions often carry the burden of informing the hearer of propositions as well as referring to objects. Therefore, many phenomena that are observed in dialogues canet.¥_w~eet../-J "-°'~ ""' "-~The author gratefully acknowledges the support for this research provided in part by the Office of Naval Research under contract N0014-80-C-0296 and in part by the National Science Foundation under grant MCS-8115105. not be explained by the simple view that referring expressions are descriptions of the intended referent sufficient to distinguish the referent from other objects in the domain or in focus.Consider the situation (depicted in Figure 1 ) in which two agents, an apprentice and an expert, are cooperating on a common task, such as disassembling an air compressor. Several tools are lying on the workbench, and although the apprentice knows that the objects are there, he may not necessarily know where they are. The expert might say:Use the wheelpuller to remove the flywheel.( 1)while pointing at the wheelpuller. The apprentice may think to himself at this point, "Ah, ha, so that's a wheelpuller," and then proceed to remove the flywheel.What the expert is accomplishing through the utterance of (1) by using the noun phrase "the wheelpuller" cannot be fully explained by treating definite referring expressions simply as descriptions that are uniquely true of some object, even taking focusing [71 [11] into account. The expert uses "the wheelpuller" to refer to an object that in fact uniquely fits the description predicated of it, so this simple analysis is incapable of accounting for the effects the expert intends his utterance to have.If one takes the knowledge and intentions of the speaker and hearer into account, a more accurate account of the speaker's use of the referring expression can be developed. The apprentice does not know what the object is that fits the description "the wheelpuller". The expert knows that the apprentice doesn't know this, and performs the pointing action to guarantee that his intentions will be recognized correctly.The apprentice must recognize what the expert is trying to communicate by pointing --he must realize that pointing is not just a random gesture, but is intended by the speaker to be recognized as a communicative act by the hearer in much the same way as his utterances are recognized as communicative acts. Furthermore, the apprentice must recognize how the pointing act is cw:,'elated with the utterance the expert is producing. Although there is no sped~: deictic reference in the expert's utterance, it is clear that he does not mean the flywheel, since we will assume that the apprentice can determine that the object he is pointing to is a tool. The apprentice realizes that the object the expert is pointing to is the intended referent of "the wheelpuUer," but in the process, he also acquires the information that the expert believes the object he is pointing to is a wheelpuller, and that the exPert has also informed him of that fact.A language-planning system called KAMP (for Knowledge And Modalities Planner} has been developed that can plan utterances similar to example {1) above, coordinate the linguistic actions with physical actions, and know that the utterance it plans will have the intended multiple effects on the hearer. 
KAMP builds on Cohen and Perrault's idea of planning speech acts [4] , but extends the planning activity down to the level of constructing surface English sentences. A detailed description of the entire KAMP system can be found in [2] . The system has been implemented and tested on examples in a cooperative equipment assembly domain, such as the one in example {1). This paper develops and extends some of the ideas of an early prototype system described in [1] .The reference problems that KAMP addresses are a subset of a more general problem, which, following Cohen [5] will be called 'identification.' Whenever a speaker makes a definite reference, he intends the hearer to identify some object in the world as the referent. Identifying a refer-en~ requires that the agent perform some cognitive activity, such as the simple case of matching the description with what he knows, or in some cases plan to perform perceptual actions that lead to the identification. KAMP simplifies the problem by not considering perceptual actions, and assumes that there is some 'perceptual field' common to the participants in a dialogue, and that the objects that lie within that field are mutually known to the participants, along with the observable properties and relations that hold among them.For example, the speaker and hearer in (1) are assumed to mutually know the size, shape and location of all objects on the workbench. The agents may not know unobservable properties of the objects, such as the fact that a particular tool is a wheelpuller. Similarly, the participants are assumed to be mutually aware of physical actions that take place within their perceptual field, without explicitly performing any perceptual actions. When the expert points at the wheelpuller, the apprentice is simply assumed to know that he is doing it. H. KNOWLEDGE REPRESENTATION KAMP uses an intensional logic to describe facts about the world, including the knowledge of agents. The possibleworlds semantics of this intensional logic is axiomatized in first-order logic as described by Moore [8] . The axiomatization enables KAMP to reason about how the knowledge of both the speaker and the hearer changes as they perform actions.* What it means to identify an object is somewhat problematical.KAMP assumes that identification means that the referring description conjoined with focusing knowledge picks out the same individual in all possible worlds consistent with what the agent knows.Moore's central idea is to axiomatize operators such as Know as relations between possible worlds. For example, if Wo denotes the real world, then Know(John, P) means P is true in every possible world that is consistent with what John knows. This is stated formally in the axiom schema:Vw2 K(A, w,, w2) D T(w2,P).( 1)The predicate T(w,P) means that P is true in possible world w. The predicate K(A,w,,w2) means that w2 is consistent with what A knows in w,.Actions are described by treating possible worlds as state variables, and axiomatizing actions as relations between possible worlds. Thus, R(E, wl, w2) means that world w2 is the result of event E happening in world w2.It is important that a language planning system reason about mutual knowledge while planning referring expressions [31151. Failure to consider the mutual knowledge of the speaker and hearer can lead to the failure of the reference. K.AMP uses an axiomatization of mutual knowledge in terms of relations on possible worlds. 
An agent's knowledge is described as everything that is true in all possible worlds compatible with his knowledge. The mutual knowledge of two agents A and B is everything that is (2)In (2), T(w, P) means that the object language proposition P is true in possible world w, and K(a, w,, w~) is a predicate that describes the relation between possible worlds that means that w2 is a possible alternative to w, according to a's knowledge. The second axiom needed is:Vz, w,, w2 K(z, w,, w2) D VyK(Kernel(z, y), wl, w~) (3)Axiom (3) states that the possible worlds consistent with any agent's knowledge is a subset of the possible worlds consistent with the kernel of that agent and any other agent.KAMP is a multiple-agent planning system designed around a NOAH-like hierarchical planner [10] . KAMP uses two descriptions of each action available to the planning agent: a complete axiomatization of the action using the possible-worlds approach outlined above, and an action * Notice that the "intersection" of the propositions believed by two agents is represented by the union of possible worlds compatible with their knowledge. summary consisting of a simplified description of the action that serves as a heuristic to aid in proposing plans that are likely to succeed. KAMP forms a plan using the simplified action summaries first, and then verifies the plan using the full axiomatization. Since the possible-worlds axioms lend themselves more efficiently to proving a plan correct than in generating a plan in the first place, such an approach results in a system that is considerably more efficient than one relying on the possible-worlds axioms alone.Because action summaries represent actions in a simplified form, the planner can ignore details of the effects of communicative acts to produce a plan that is likely to work in most circumstances. For example, if a simplified description of the effects of informing states that the hearer knows the proposition, then the planner can reason that a plan to achieve the goal of the hearer knowing P is likely to include the action of informing him that P is true. In the relatively unlikely event that this description is inadequate, this fact will be detected during the verification phase where the more complete description is invoked.The flow of control during KAMP's heuristic plan-generation phase is similar to that of NOAH's. If a goal needs to be satisfied, KAMP searches for actions that can achieve the goal and inserts them into the plan, along with the preconditions, which become new goals to be satisfied. When the entire plan has been expanded to one level of abstraction, then if there is a lower level, all high-level actions that have low-level expansions are expanded.Between each stage of expansion, critics are invoked that examine the plan for global interactions between actions, and make changes in the structure of the plan to avoid the bad effects of the interactions and take advantage of the beneficial ones. Critics play an important role in the planning of referring expressions, and their functions are described more fully in Section IV. Figure 2 . The hierarchy consists of illocntionary acts, surface speech-acts, concept-activation actions, and utterance acts• Illocutionary acts are speech acts such as informing and requesting, which are planned at the highest level without regard for any specific linguistic realization. 
The next level consists of surface speech-acts, which are abstractions of the actions of uttering particular sentences with particular syntactic structures. At this level the planner starts making commitments to particular choices in syntactic structure, and linguistic knowledge enters the planning process. One surface speech-act can realize one or more illocutionary acts. The next level consists of conceptactivation actions, which entail the planning of descriptions that are mutually believed by the speaker and hearer to refer to objects in the world. This is the level of abstraction at which noun phrases for definite reference are planned. Finally, at the lowest level of abstraction are utterance acts, consisting of the utterance of specific words.Concept-activation actions describe referring at a high enough level of abstraction so that they are not constrained to have purely linguistic realizations. When a conceptactivation action is expanded to a lower level of abstraction, it can result in the planning of a noun phrase within the surface speech-act of which the concept activation is a part, and physical actions such as pointing that also communicate the speaker's intention to refer.KAMP can plan referential definite noun phrases that realize concept-activation actions. (The planning of attributive and indefinite referring expressions has not yet been addressed.) KAMP recognizes the need to plan a concept activation when it is expanding a surface speechact. The surface speech-act is planned with a particular proposition that the hearer has to come to believe the speaker wants him to know or want. It is necessary to include whatever information the hearer needs to recognize what the proposition is, and this leads to the necessity of referring to the particular objects mentioned in the proposition. The planner often reasons that some objects do not need to be referred to at all. For example, in requesting a hearer to remove the pump from the platform in an air-compressor assembly task, if the hearer knows that the pump is attached to the platform and nothing else, it is not necessary to mention the platform, since it is sufficient to say "Remove the pump," for the hearer to recognize the following propomtlon: Want(S, Do(H, Remove(pumpl, platforml))).The planning of a concept-activation action is similar to the planning of an illocutionary act in that the speaker is trying to get the hearer to recognize his intention to perform the act. This means that all that is necessary from a high-level planning point of view is that the speaker perform some action that signals to the hearer that the * For a description of KAMP's formalization of wanting, see Appelt, 12]• speaker wants to refer to the object. This is often done by incorporating a mutually believed description of the object into the utterance, but there is no requirement that the means by which the speaker communicates this intention be linguistic. For example, the speaker could point at an object (almost always a communicative act), or perhaps throw it at the hearer (not so clearly communicative but definitely attention-getting. The hearer has to reason whether there are any communicative intentions behind the act.)Since concept-activation actions are planned during the expansion of surface speech-acts, the actions that realize them must somehow become part of the utterance being planned. 
Therefore, all concept-activation actions are expanded as part of the surface speech-act of which they are a component.

The following two axiom schemata, (4) and (5), describe concept activation in KAMP's possible-worlds representation. Axiom schema (4) says that when an agent A performs a concept activation for an agent B, he must first want the object C to be active, and, as a result of performing it, C becomes active with respect to A and B. Axiom schema (5) says that after agent A performs the action, the two agents A and B mutually know that the action has been performed. The consequence for the planner of axiomatizing concept activation as in (4) and (5) is that the problem of activating a concept now becomes one of getting the hearer to know that the speaker wants a particular concept to be active. This is the role of the intention-communication component in the expansion of the concept activation.

KAMP knows about two types of actions that produce knowledge about what concepts a speaker wants to be active. One is an action called describe, which is ultimately expanded into a linguistic description corresponding to the concept the speaker intends to activate; the other is called point, which is a generalized pointing action. The point action is assumed to directly communicate the intention to activate a concept, thereby avoiding the problem of observing a gesture and deciding whether it is a pointing or an attempt to scratch an itch.

The following schema defines the describe action:

∀w1, w2  R(Do(A, Describe(B, P)), w1, w2) ⊃
    ∃x [ ∀y (D*(y) ≡ x = y)  ≡  T(w1, Want(A, Active(A, B, x))) ]    (6)

Axiom (6) says that the precondition for an agent to perform an action of describing using a particular description P is that the speaker wants an object to be active if and only if it uniquely fits the description predicated of it. In (6), the symbol P denotes a description consisting of object-language predicates that can be applied to the object being described. It could be defined as

P ≡ λx.(D1(x) ∧ ... ∧ Dn(x))

where the Di(x) are the individual descriptors that comprise the description. The symbol D* denotes a similar expression, which includes all the descriptors of P conjoined with a set of predicates that describe the focus of the discourse. An axiom similar to (5) is also needed to assert that the speaker and hearer will mutually know, after the action is performed, that it has taken place. Therefore, if the speaker and hearer mutually know of an object that satisfies P in focus, then they mutually know that the speaker wants it to be active.

The pointing action is much simpler, because it does not require either the speaker or the hearer to know anything at all about the object:

∀w1, w2  R(Do(A, Point(B, X)), w1, w2) ⊃ T(w1, Want(A, Active(A, B, X)))    (7)

According to the above axiom, if an agent points at an object, that implies that he wants the object to be active. As usual, an axiom similar to (5) is required to assert that the agents mutually know the action has been performed. Axioms (4) and (5) work together with (6) and (7) to produce the desired effects. When a speaker utters a description, or points, he communicates his intention to refer. When he performs the concept-activation action by incorporating the surface-linguistic component of his action into a surface speech-act, his intentions are carried out.
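The uniqueness condition that axiom (6) places on the describe action can also be sketched concretely. The fragment below is illustrative only (the object list, descriptor symbols, and function names are invented, not KAMP code): it checks whether a description, evaluated against what is mutually known and in focus, picks out exactly one object, which is the circumstance under which uttering the description communicates the intention to activate that object's concept.

```lisp
;;; Illustrative sketch only: checking that a description denotes uniquely.

(defparameter *mutually-known-objects*
  '((tool1 wheelpuller in-focus) (tool2 wrench) (pump1 pump in-focus)))

(defun satisfies-p (object descriptors)
  "True when OBJECT (a list headed by its name) carries every descriptor."
  (every (lambda (d) (member d (rest object))) descriptors))

(defun unique-referent (descriptors objects)
  "Return the single object satisfying DESCRIPTORS, or NIL when the
description is not uniquely satisfied (the precondition of (6) fails)."
  (let ((hits (remove-if-not (lambda (obj) (satisfies-p obj descriptors))
                             objects)))
    (when (= (length hits) 1) (first hits))))

;; A describe action using the descriptors (wheelpuller in-focus) would do:
;; (unique-referent '(wheelpuller in-focus) *mutually-known-objects*)
;;   => (TOOL1 WHEELPULLER IN-FOCUS)
;; The descriptor in-focus alone is ambiguous, so it would not:
;; (unique-referent '(in-focus) *mutually-known-objects*)  => NIL
```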
Because the equivalence of axiom (6) can be used in both directions, if the speaker wants an object to be active, then one can reason that he knows the description predicated of it is true.

A major problem facing the planner is deciding when the necessary conditions obtain to be able to take advantage of the interactions between (6) and (7). Since this task involves examining several actions in the plan, it is performed by a critic called the action-subsumption critic. This critic notices when the speaker is informing the hearer of a predication that could be included in the description associated with a concept activation. (A complete discussion of focusing in KAMP is beyond the scope of this paper; KAMP uses an axiomatization of Sidner's focusing rules [11] to keep track of focus shifts.) When such an interaction is noticed, the critic proposes a modification to the plan. If the surface-linguistic component does not insist that the modification is impossible given the grammar, then the action subsumption is carried out.

In example (1), for instance, the expert has a high-level plan that includes the performance of two illocutionary acts: requesting that the apprentice remove the pump using a particular tool (call it tool1), and informing the apprentice that tool1 is a wheelpuller. The action-subsumption critic notices that in the request the expert is referring to tool1 and also wants to inform the hearer of a property of tool1. Therefore, it proposes combining the property of being a wheelpuller into the description used for referring to tool1 while making the request.

This paper has described a formalism for describing the action of referring in a manner that is useful for a generation system based on planning, like KAMP. The central idea is to divide referring into two tasks: an intention-communication task and a surface-linguistic task. By so doing, it is possible to axiomatize different actions that communicate a speaker's intention to refer. Thus, the planner is able to produce plans that yield natural-language referring expressions, but that take the larger context of the speaker's nonlinguistic actions into account as well.

KAMP currently plans only simple definite reference. One promising extension of this approach for future research is to extend the Active predicate to apply to intensional concepts in addition to the extensional ones now required for definite reference. We hope this will allow for the planning of attributive and indefinite reference as well. KAMP currently does not plan quantified noun phrases, nor can it refer generically, nor can it refer to collections of entities. Much basic research needs to be done to extend KAMP to handle these other cases, but we hope that the formalism outlined here will provide a good base from which to investigate these extensions.
Appendix:
| null | null | null | null | {
"paperhash": [
"cocchiarella|situations_and_attitudes.",
"stickel|theory_resolution:_building_in_nonequational_theories",
"nadathur|mutual_beliefs_in_conversational_systems:_their_role_in_referring_expressions",
"grosz|providing_a_unified_account_of_definite_noun_phrases_in_discourse",
"appelt|telegram:_a_grammar_formalism_for_language_planning",
"mann|nigel:_a_systemic_grammar_for_text_generation.",
"cohen|the_need_for_referent_identification_as_a_planned_action",
"appelt|a_planner_for_reasoning_about_knowledge_and_action",
"appelt|problem_solving_applied_to_language_generation",
"cohen|elements_of_a_plan-based_theory_of_speech_acts",
"sidner|towards_a_computational_theory_of_definite_anaphora_comprehension_in_english_discourse",
"grosz|focusing_and_description_in_natural_language_dialogues",
"bruce|interacting_plans",
"olson|from_utterance_to_text:_the_bias_of_language_in_speech_and_writing",
"appelt|planning_natural_language_utterances_to_satisfy_multiple_goals",
"prince|toward_a_taxonomy_of_given-new_information",
"sacerdoti|a_structure_for_plans_and_behavior",
"bruce|belief_systems_and_language_understanding",
"searle|speech_acts:_an_essay_in_the_philosophy_of_language",
"cohen|speech_acts_and_the_recognition_of_shared_plans"
],
"title": [
"Situations and Attitudes.",
"Theory Resolution: Building in Nonequational Theories",
"Mutual Beliefs in Conversational Systems: Their Role in Referring Expressions",
"Providing a Unified Account of Definite Noun Phrases in Discourse",
"TELEGRAM: A Grammar Formalism for Language Planning",
"Nigel: A Systemic Grammar for Text Generation.",
"The Need for Referent Identification as a Planned Action",
"A Planner for Reasoning about Knowledge and Action",
"Problem Solving Applied to Language Generation",
"Elements of a Plan-Based Theory of Speech Acts",
"Towards a computational theory of definite anaphora comprehension in English discourse",
"Focusing and Description in Natural Language Dialogues",
"Interacting plans",
"From Utterance to Text: The Bias of Language in Speech and Writing",
"Planning natural language utterances to satisfy multiple goals",
"Toward a taxonomy of given-new information",
"A Structure for Plans and Behavior",
"Belief systems and language understanding",
"Speech Acts: An Essay in the Philosophy of Language",
"Speech Acts and the Recognition of Shared Plans"
],
"abstract": [
"In this provocative book, Barwise and Perry tackle the slippery subject of \"meaning, \" a subject that has long vexed linguists, language philosophers, and logicians.",
"Theory resolution constitutes a set of complete procedures for building nonequational theories into a resolution theorem-proving program so that axioms of the theory need never be resolved upon. Total theory resolution uses a decision procedure that is capable of determining inconsistency of any set of clauses using predicates in the theory. Partial theory resolution employs a weaker decision procedure that can determine potential inconsistency of a pair of literals. Applications include the building in of both mathematical and special decision procedures, such as for the taxonomic information furnished by a knowledge representation system.",
"Shared knowledge and beliefs affect conversational situations in various ways. One aspect in which they play a role is the choice of referring expressions. It is of interest to analyse this role since a natural language system must be able to decide when it can use a particular referring expression; or alternatively what a particular expression refers to. In this paper we attempt to formally characterise conditions for these. Specifically, we differ with the traditional notion of mutual knowledge and belief, state a conversational conjecture that convinces us to do so, express a weakened notion in a formal system for reasoning about knowledge, and show how this might be used to decide on satisfactory referring expressions. It is desirable to express a weakened notion of mutual belief that parallels that for mutual knowledge; this aspect is currently being investigated.",
"Citation Grosz, Barbara J., Aravind K. Joshi, and Scott Weinstein. 1983. Providing a unified account of definite noun phrases in discourse. In 21st Annual Meeting of the Association for Computational Linguistics: proceedings of the conference : 15-17 June 1983, Massachusetts Institute of Technology, Cambridge, Massachusetts, ed. Association for Computational Linguistics, 44-50. Morristown, N.J.: Association for Computational Linguistics.",
"Planning provides the basis for a theory of language generation that considers the communicative goals of the speaker when producing utterances. One central problem in designing a system based on such a theory is specifying the requisite linguistic knowledge in a form that interfaces well with a planning system and allows for the encoding of discourse information. The TELEGRAM (TELEological GRAMmar) system described in this paper solves this problem by annotating a unification grammar with assertions about how grammatical choices are used to achieve various goals, and by enabling the planner to augment the functional description of an utterance as it is being unified. The control structures of the planner and the grammar unifier are then merged in a manner that makes it possible for general planning to be guided by unification of a particular functional description.",
"Abstract : Programming a computer to write text which meets a prior need is a challenging research task. As part of such research, Nigel, a large programmed grammar of English, has been created in the framework of systemic linguistics begun by Halliday. In addition to specifying function and structures of English, Nigel has a novel semantic stratum which specifies the situations in which each grammatical feature should be used. The report consists of three papers on Nigel: an introductory overview, the script of a demonstration of its use in generation, and an exposition of how Nigel relates to the systemic framework. Although the effort to develop Nigel is significant both as computer science research and as linguistic inquiry the outlook of the report is oriented to its linguistic significance.",
"The paper presents evidence that speakers often attempt to get hearers to identify referents as a separate step in the speaker's plan. Many of the communicative acts performed in service of such referent identification steps can be analyzed by extending a plan-based theory of communication for task-oriented dialogues to include an action representing a hearer's identifying the referent of a description -- an action that is reasoned about in speakers' and hearers' plans. The phenomenon of addressing referent identification as a separate goal is shown to distinguish telephone from teletype task-oriented dialogues and thus has implications for the design of speech-understanding systems.",
"This paper reports recent results of research on planning systems that have the ability to deal with multiple agents and to reason about their knowledge and the actions they perform. The planner uses a knowledge representation based on the possible worlds semantics axiomatization of knowledge, belief and action advocated by Moore [5]. This work has been motivated by the need for such capabilities in natural language processing systems that will plan speech acts and natural language utterances [1, 2]. The sophisticated use of natural language requires reasoning about other agents, what they might do and what they believe, and therefore provides a suitable domain for planning to achieve goals involving belief. This paper does not directly address issues of language per se, but focuses on the problem-solving requirements of a language-using system, and describes a working system, kamp (Knowledge And Modalities Planner), that embodies the ideas reported herein.",
"This research was supported at SRI International by the Defense Advanced Research Projects Agency under contract N00039--79--C--0118 with the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the author and should not be interpreted as representative of the official policies either expressed or implied of the Defense Advanced Research Projects Agency, or the U. S. Government. The author is grateful to Barbara Grosz, Gary Hendrix and Terry Winograd for comments on an earlier draft of this paper.",
"This paper explores the truism that people think about what they say. It proposes hat, to satisfy their own goals, people often plan their speech acts to affect their listeners' beliefs, goals, and emotional states. Such language use can be modelled by viewing speech acts as operators in a planning system, thus allowing both physical and speech acts to be integrated into plans. \n \nMethodological issues of how speech acts should be defined in a plan-based theory are illustrated by defining operators for requesting and informing. Plans containing those operators are presented and comparisons are drawn with Searle's formulation. The operators are shown to be inadequate since they cannot be composed to form questions (requests to inform) and multiparty requests (requests to request). By refining the operator definitions and by identifying some of the side effects of requesting, compositional adequacy is achieved. The solution leads to a metatheoretical principle for modelling speech acts as planning operators.",
"Abstract : This report investigates the process of focussing as a description and explanation of the comprehension of certain anaphoric expressions in English discourse. The investigation centers on the interpretation of definite anaphora, that is, on the personal pronouns, and noun phrases used with a definite article the, this, or that. Focussing is formalized as a process in which a speaker centers attention on a particular aspect of the discourse. An algorithmic description specifies what the speaker can focus on and how the speaker may change the focus of the discourse as the discourse unfolds. The algorithm allows for a simple focussing mechanism to be constructed: an element in focus, an ordered collection of alternate foci, and a stack of old foci. The data structure for the element in focus is a representation which encodes a limited set of associations between it and other elements from the discourse as well as from general knowledge. This report also establishes other constraints which are needed for the successful comprehension of anaphoric expressions. The focussing mechanism is designed to take advantage of syntactic and semantic information encoded as constraints on the choice of anaphora interpretation. These constraints are due to the work of language researchers; and the focussing mechanism provides a principled means for choosing when to apply the constraints in the comprehension process.",
"Abstract : When two people talk, they focus their attention on only a small portion of what each of them knows or believes. Both what is said and how it is interpreted depend on a shared understanding of this narrowing of attention to a small highlighted portion of what is known. Focusing is an active process. As a dialogue progresses, the participants continually shift their focus and thus form an evolving context against which utterances are produced and understood. A speaker provides a hearer with clues of what to look at and how to look at it what to focus on, how to focus on it, and how wide or narrow the focusing should be. As a result, one of the effects of understanding an utterance is that the listener becomes focused on certain entities (both objects and relationships) from a particular perspective. Focusing clues may be linguistic or they may come from knowledge about the relationships between entities in the domain. Linguistic clues may be either explicit, deriving directly from certain words, or implicit, deriving from sentential structure and from rhetorical relationships between sentences. This paper examines the relationship between focusing and definite descriptions in dialogue and its implications for natural language processing systems. It describes focusing mechanisms based on domain- structure clues which have been included in a computer system and, from this perspective, indicates future research problems entailed in modeling the focusing process more generally.",
"The paper presents a notation system for the representation of Interacting plans and applies it in the analysis of a small portion of \"Hansel and Gretel\". The essential problem for the notation system can be stated as follows: How do we represent the plans that determine behavior in a way that explicates Interactions among plans? As the examples Illustrate, the problem is not just to show how cooperation takes place, how conflicts arise and are resolved, how beliefs about plans determine actions, and how differing beliefs and intentions make a story. The system incorporates ideas from work on simple, or non-interacting plans, but the focus is on plans in a social context.",
"In this far-ranging essay David Olson attempts to reframe current controversies over several aspects of language, including meaning, comprehension, acquisition, reading, and reasoning. Olson argues that in all these cases the conflicts are rooted in differing assumptions about the relation of meaning to language: whether meaning is extrinsic to language—a relation Olson designates as \"utterance\"—or intrinsic— a relation he calls \"text.\" On both the individual and cultural levels there has been development, Olson suggests, from language as utterance to language as text. He traces the history and impact of conventionalized, explicit language from the invention of the Greek alphabet through the rise of the British essayist technique. Olson concludes with a discussion of the resulting conception of language and the implications for the linguistic, psychological, and logical issues raised initially.",
"This dissertation presents the results of research on a planning formalism for a theory of natural language generation that incorporates generation of utterances that satisfy multiple goals. Previous research in the area of computer generation of natural language utterances has concentrated on one of two aspects of language production: (1) the process of producing surface syntactic forms from an underlying representation, and (2) the planning of illocutionary acts to satisfy the speaker's goals. This work concentrates on the interaction between these two aspects of language generation and considers the overall problem to be one of refining the specification of an illocutionary act into a surface syntactic form, emphasizing the problems of achieving multiple goals in a single utterance. \nPlanning utterances requires an ability to do detailed reasoning about what the hearer knows and wants. A formalism, based on a possible worlds semantics of an intensional logic of knowledge and action, was developed for representing the effects of illocutionary acts and the speaker's beliefs about the hearer's knowledge of the world. Techniques are described that enable a planning system to use the representation effectively. \nThe language planning theory and knowledge representation are embodied in a computer system called KAMP (Knowledge And Modalities Planner) which plans both physical and linguistic actions, given a high level description of the speaker's goal. \nThe research has application to the design of gracefully interacting computer systems, multiple-agent planning systems, and planning to acquire knowledge.",
"A warp knitting machine is adapted for the production of looped fabric by providing filler sinkers to supply filler threads extending over the entire width of the machine in parallel relationship. The filler threads arrive consecutively at the needle zone, and basic fabric and loops are produced simultaneously using warp threads and pile yarn, respectively. The filler sinkers perform a substantially rectangular movement ensuring a fail-safe operation with high operating speeds.",
"Abstract : This report describes progress to date in the ability of a computer system to understand and reason about actions. A new method of representing actions within a computer's memory has been developed, and this new representation, called the \"procedural net,\" has been employed in developing new strategies for solving problems and monitoring the execution of the resulting solutions. A set of running computer programs, called the NOAH (Nets Of Action Hierarchies) system, embodies this representation. Its major goal is to provide a framework for storing expertise about the actions of a particular task domain, and to impart that expertise to a human in the cooperative achievement of nontrivial tasks. A problem is presented to NOAH as a statement that is to be made true by applying a sequence of actions in an initial state of the world. The actions are drawn from a set of actions previously defined to the system. NOAH first creates a one-step solution to the problem, then it progressively expands the level of detail of the solution, filling in ever more detailed actions. All the individual actions, composed into plans at differing levels of detail, are stored in the procedural net. The system avoids imposing unnecessary constraints on the order of the actions in a plan. Thus, plans are represented as partial orderings of actions, rather than as linear sequences. The same data structure is used to guide the human user through a task. Since the system has planned the task at varying levels of detail, it can issue requests for action to the user at varying levels of detail, depending on his/her competence and understanding of the higher level actions. If more detail is needed than was originally planned for, or if an unexpected event causes the plan to go awry, the system can continue to plan from any point during execution. In essence, the structure of a plan of actions is as important for problem solving and execution monitoring as the nature of the actions themselves.",
"Abstract : The paper discusses some of the 'belief systems knowledge' used in language understanding. It begins with a presentation of a theory of personal causation. The theory supplies the tools to account for purposeful behavior. Using primitives of the theory the social aspect of an action can be described. The social aspect is that which depends on beliefs and intentions. Patterns of behavior, called 'social action paradigms' (SAP's), are then defined in terms of social actions. The SAP's provide a structure for episodes analogous to the structure a grammar provides for sentences.",
"Part I. A Theory of Speech Acts: 1. Methods and scope 2. Expressions, meaning and speech acts 3. The structure of illocutionary acts 4. Reference as a speech act 5. Predication Part II. Some Applications of the Theory: 6. Three fallacies in contemporary philosophy 7. Problems of reference 8. Deriving 'ought' from 'is' Index.",
"1A7 This paper outlines a preliminary design for a system to understand one-sided arguments. These are a particular kind of conversation, where"
],
"authors": [
{
"name": [
"N. Cocchiarella",
"J. Barwise",
"J. Perry"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Stickel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Nadathur",
"A. Joshi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Grosz",
"A. Joshi",
"S. Weinstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Appelt"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Mann",
"C. Matthiessen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Philip R. Cohen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Appelt"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Appelt"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Philip R. Cohen",
"C. Raymond Perrault"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Sidner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Grosz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Bertram C. Bruce",
"Denis Newman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Olson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Appelt"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Prince"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Sacerdoti"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Bertram C. Bruce"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Searle"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Philip R. Cohen",
"H. Levesque"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"124893762",
"18756657",
"10097129",
"10179551",
"1529030",
"57089912",
"13321126",
"33668308",
"436199",
"2166355",
"41092026",
"60968746",
"8569060",
"142824751",
"60491098",
"58335636",
"60729110",
"118337048",
"147355356",
"264403307"
],
"intents": [
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[
"background"
],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | - Problem: The paper addresses the challenge of producing natural-language referring expressions that serve multiple goals in a language-planning system.
- Solution: The paper proposes a formal representation for reasoning about mutual knowledge among agents using possible-worlds semantics, and outlines a system that integrates physical and linguistic actions to plan utterances with concept activation actions and pointing as communicative actions. | 512 | 0.027344 | null | null | null | null | null | null | null | null |
40ec93033ccc7e4eb3bc959ae746dd6f551d4485 | 872672 | null | Linguistic and Computational Semantics | We argue that because the very concept of computation rests on notions of interpretation, the semantics of natural languages and the semantics of computational formalisms are in the deepest sense the same subject. The attempt to use computational formalisms in aid of an explanation of natural language semantics, therefore, is an enterprise that must be undertaken with particular care. We describe a framework for semantical analysis that we have used in the computational realm, and suggest that it may serve to underwrite computationally-oriented linguistic semantics as well. The major feature of this framework is the explicit recognition of both the declarative and the procedural import of meaningful expressions; we argue that whereas these two viewpoints have traditionally been taken as alternative, any comprehensive semantical theory must account for how both aspects of an expression contribute to its overall significance. | {
"name": [
"Smith, Brian Cantwell"
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 27 | 9 | null | We have argued elsewhere 1 that the distinguishing mark of those objects and processes we call computational has to do with attn'buted semantics." we humans find computational processes coherent exactly because we attach semantical significance to their behaviour, ingredients, and so forth. Put another way, computers, on our view, are those devices that we understand by deploying our linguistic faculties. For example, the reason that a calculator is a computer, but a car is not, is that we take the ingredients of the calculator to be symbolic (standing, in this particular case, for numbers and functions and so forth), and understand the interactions and organisation of the calculator in terms of that interpretation (this part divides, this part represents the sum, and so on). Even though by and large we are able to produce an explanation of the behaviour that does not rest on external semantic attribution (this is the formality condition mentioned by Fodor, Haugeland. and othersz), we nonetheless speak, when we use computational terms, in terms of this semantics. These semantical concepts rest at the foundations of the discipline: the particular organisations that computers have their computational raison d'etre ~ emerge not only from their mechanical structure but also from their semantic interpretability. Similarly, the terms of art employed in computer science --program, compiler, implementation, interpreter, and so forth --will ultimately he definable only with reference to this attributed semantics; they will not, on our view, ever be found reducible to non-semantical predicates?This is a ramifying and problematic position, which we cannot defend here. 4 We may simply note, however, the overwhelming evidence in favour of a semantical approach manifested by everyday computational language. Even the simple view of computer science as the study of symbol manipulation s reveals this bias. Equally telling is the fact that programming languages are called languages. In addition, language-derived concepts like name and reference and semantics permeate computational jargon (to say nothing of interpreter, value, variable, memory, expression, identifier and so on) --a fact that would be hard to explain if semantics were not crucially involved. It is not just that in discussing computation we use language; rather, in discussing computation we use words that suggest that we are also talking about linguistic phenomena.The question we will focus on in this paper, very briefly, is this: if computational artefacts are fundamentally linguistic, and if, therefore, it is appropriate to analyse them in terms of formal theories of semantics (it is apparent that this is a widely held view), then what is the proper relationship between the so-called computational semantics that results, and more standard linguistic semantics (the discipline that studies people and their natural languages: how we mean, and what we are talking about, and all of. that good stuff)? And furthermore, what is it to use computational models to explain natural language semantics, if the computational models are themselves in need of semantical analysis? 
On the face of it, there would seem to be a certain complexity that should he sorted out.In answering these questions we will argue approximately as follows: in the limit computational semantics and linguistic semantics will coincide, at least in underlying conception, if not in surface detail (for example some issues, like ambiguity, may arise in one case and not in the other). Unfortunately, however, as presently used in computer science the term "semantics" is given such an operational cast that it distracts attention from the human attribution of significance to computational structures. 6 In contrast, the most successful models of natural language semantics, embodied for example in standard model theories and even in Montague's program, have concentrated almost exclusively on referential or denotational aspects of declarative sentences.Judging only by surface use, in other words, computational semantics and linguistic semantics appear almost orthogonal in concern, even though they are of course similar in so'le (for example they both use meta-theoretic mathematical techniques --functional composition, and so forth -to recursively specify the semantics of complex expressions from a given set of primitive atoms and formation rules). It is striking, however, to observe two facts. First, computational semantics is being pushed (by people and by need) more and more towards declarative or referential issues. Second, natural language semantics, particularly in computationally-based studies, is focusing more and more on pragmatic questions of use and psychological import. Since computational linguistics operates under the computational hypothesis of mind, psychological issues are assumed to be modelled by a field of computational structures and the state of a processor running over them; thus these linguistic concerns with "use" connect naturally with the "operational" flavour of standard programming language semantics. It seems not implausible, therefore --we betray our caution with the double negative --that a unifying framework might be developed.It will be the intent of this paper to present a specific, if preliminary, proposal for such a framework. First, however, some introductory comments. In a general sense of the term, semantics can be taken as the study of the relationship between entities or phenomena in a syntactic domain s and corresponding entities in a semantic domain t). as pictured in the following diagram.We call the function mapping dements from the first domain into elements of the second an interpretation function (to be sharply distinguished 7 from what in computer science is called an interpreter, which is a different beast altogether). Note that the question of whether an element is syntactic or semantic is a function of the point of view; the syntactic domain for one interpretation function can readily be the semantic domain of another (and a semantic domain may of course include its own syntactic domain).Not all relationships, of course, count as semantical; the "grandmother" relationship fits into the picture just sketched, but stakes no claim on being semantical. Though it has often been discussed what constraints on such a relationship characterise genuinely semantical ones (compositionality or recursive specifiability, and a certain kind of formal character to the syntactic domain, are among those typically mentioned), we will not pursue such questions here. 
Rather, we will complicate our diagram as follows, so as to enable us to characterise a rather large class of computational and linguistic formalisms: order logic, sl and s2 would be something like abstract derivation tree types of first-order formulae; if the diagram were applied to the human mind, under the hypothesis of a formally encoded mentalese, s~ and s2 would be tokens of internal mentalese, and e would be the function computed by the "linguistic" faculty (on a view such as that of Fodora). In adopting these terms we mean to be speaking very generally; thus we mean to avoid, for example, any claim that tokens of English are internalised (a term we will use for o) into recognisable tokens of mentalese. In particular, the proper account of e for humans could well simply describe how the field of mentalese structures, in some configuration, is transformed into some other configuration, upon being presented with a particular English sentence; this would still count, on our view, as a theory of o.[ )¢otation )¢l ] ] )~otation ~2 ]In contrast, ~ is the interpretation function that makes explicit the standard denotational significance of linguistic terms, relating, we may presume, expressions in $ to the world of discourse. The relationship between my mental token for T. S. Eliot, for example, and the poet himself, would he formulated as pan of ~. Again, we speak very broadly; ¢ is intended to manifest what, paradigmatically, expressions are about, however that might best be formulated (,1, includes for example the interpretation functions of standard model theories), q,, in contrast, relates some internal structures or states to others --one can imagine it specifically as the formally computed derivability relationship in a logic, as the function computed by the primitive language processor in a computational machine (i.e., as tzsP'S EVAL), or more generally as the function that relates one configuration of a field of symbols to another, in terms of the modifications engendered by some internal processor computing over those states. (~ and q, are named, for mnemonic convenience, by analogy with philosophy and psychology, since a study of • is a study of the relationship between expressions and the world --since philosophy takes you "out of your mind", so to speak --whereas a study of ~v is a study of the internal relationships between symbols. all of which, in contrast, are "within the head" of the person or machine.) Some simple comments. First` N~, N2, Sl, S~, o~, and oz need not all necessarily be distinct: in a case where sl is a self-referential designator, for example, D~ would he the same as s~; similarly, in a case where ~, computed a function that was designation-preserving, then D~ and o 7 would be identical. Secondly, we need not take a stand on which of x~ and • has a prior claim to being the semantics of sl. In standard logic, q, (i.e., derivability: }-) is a relationship, hut is far from a function, and there is little tendency to think of it as semantical; a study of ,I, is called proof theory. In computational systems, on the other hand, q, is typically much more constrained, and is also, by and large, analysed mathematically in terms of functions and so forth, in a manner much more like standard model theories. Although in this author's view it seems a little far-fetched to call the internal relationships (the "use" of a symbol) semantical, it is nonetheless true that we are interested in characterising both, and it is unnecesary to express a preference. 
For discussion, we will refer to .he ",-semantics of a symbol or expression as its declarative /mp0rt, and refer to its *-semantics as its procedural consequence.We have heard it said in other quarters that "procedural" and "declarative" theories of semantics are contenders; 9 to the extent that we have been able to make sense of these notions, it appears that we need both.It is possible to use this diagram to characterise a variety of standard formal systems. In the standard models of the k-calculus, for example, the designation function ~, takes h-expressions onto functions; the procedural regimen % usually consisting of =-and/lreductions, can be shown to be ~,-preserving. Similarly, if in a standard predicate logic we take • to be (the inverse of the) satisfaction relationship, with each element of S being a sentence or set of sentences, and elements of o being those possible worlds in which those sentences are true, and similarly take ,I, as the derivability relationship, then soundness and completeness can he expressed as the equation 'l'(sl,s2) m [ o~ C_ D~ ]. As for all formal systems (these presumably subsume the computational ones), it is crucial that ,t, he specifiable independent of ,l,. The h-calculus and predicate logic systems, furthermore, have no notion of a processor with state; thus the appropriate • involves what we may call local procedural conse.quence, relating a simple symbol or set of symbols to another set. In a more complex computational circumstance, as we will see below, it is appropriate to characterise a more complex f~rll procedural consequence involving not only simple expressions, but fuller encodings of the state of various aspects of the computational machine (for example, at least environments and continuations in the typical computational easel0).An important consequence of the analysis illustrated in the last figure is that it enables one to ask a question not typically asked it" computer science, about the (q,-) semantic character of the function computed by ~,. Note that questions about soundness and completeness in logic are exactly questions of this type. In separate research, 11 we have shown, by subjecting it to this kind of analysis, tJ~at computational formalisms can be usefully analysed in these terms as well. In particular, we demonstrated that the universally a:cepted LISP evaluation protocol is semantically Confused, in the fbllowing sense: sometimes it preserves • (i.e. ~(,I,(S)) = ~,(s)), and sometimes it embodies • (i.e., ,l,(s) = ,l,(s)). The traditional LISP notion of evaluation, in other words, conflates simplification and reference relationships, to its peril (in that report we propose some LISP dialects in which these two are kept strictly separate). The current moral, however, is merely that our approach allows the question of the semantical import of ,~ to be asked.As well as considering LISP. we may use our diagram to c~laracterise the various linguistically oriented projects carried on under the banner of "semantics". Model theories and formal theories of language (we include Tarski and Montague in one sweep) have concentrated primarily on ~,. 
Natural language semantics in some quarters 12 focuses on o ~ on the translation into an internal medium ~ although the question of what aspects of a given sentence must be preserved in such a translation are of course of concern (no translator could ignore the salient properties, semantical and otherwise, of the target language, be it mentalese or predicate logic, since the endeavour would otherwise be without constraint). l.ewis (for one) has argued that the project of articulating O ~ an ¢ndeavour he calls markerese semantics --cannot really be called semantics at all, 13 since it is essentially a translation relationship, zlthough it is worth noting that e in computational formalisms is not z.lways trivial, and a case can at least be made that many superficial aspects of natural language use, such as the resolution of indexicals, raay be resolved at this stage (if for example you say I am warm then I may internalise your use of the first person pronoun into my iaternal name for you).Those artificial intelligence researchers working in knowledge representation, perhaps without too much distortion, can be divided into two groups: a) those whose primary semantical allegiance is to ~, and who (perhaps as a consequence) typically use an encoding of first-order logic as.their representation language, and b) those who concern themselves primarily with ,~, and who therefore (legitimately enough) reject logic as even suggestive (* in logic --derivability is a relatively unconstrained relationship, for one thing; secondly, the relationship between the entailment relationship, to which derivability is a hopeful approximation, and the proper "~," of rational belief revision, is at least a matter of debatel4).Programming language semantics, for reasons that can at least be explored, if not wholly explained, have focused primarily on q,, although in ways that tend to confuse it with ~. Except for PROLOG, which borrows its • straight from a subset of first-order logic, and the LIsPs mentioned earlier, is we have never seen a semantical account of a programming language that gave independent accounts of • and ,1,. There are complexities, furthermore, in knowing just what the proper treatment of general languages should be. In a separate paper 16 we argue that the notion program is inherently defined as a set of expressions whose (~-) semantic domain includes data structures (and set-theoretic entities built up over them). In other words, in a computational process that deals with finance, say, the general data structures will likely designate individuals and money and relationships among them, but the terms in that pan of the process called a program will not designate these people and their money, but will instead designa:~' the data ztructures that designate people and money (plus of course relationships and functions over those data structures). Even on a declarative view like ours, in other words, the appropriate semantic domain for programs is built up over data structures --a situation strikingly like the standard semantical accounts that take abstract records or locations or whatever as elements of the otherwise mathematical domain for programming language semantics. 
It may be that this fact that all base terms in programs are meta-syntactic that has spawned the confusion between operations and reference in the computational setting.Although the details of a general story remain to be worked out, the LiSP case mentioned earlier is instructive, by way of suggestion as to how a more complete computational theory of language semantics might go. In particular, because of the context relativity and non-local effects that can emerge from processing a LISP expression, ~, is not specifiable in a strict compositional way. ,~ --when taken to include the broadest possible notion that maps entire configurations of the field of symbols and of the processor itself onto other configurations and states --is of course recursively specifiable (the same tact, in essence, as saying that LISP is a deterministic formal calculus). A pure characterlsation of ,I, without a concomitant account of $, however, is unmotivated --as empty as a specification of a derivability relationship would be for a calculus for which no semantics had been given. Of more interest is the ability to specify what we call a general significance .function 2, that recursively specifies ,I, and ,~ together (this is what we were able to do for LZSP). In particular, given any expression s~, any configuration of the rest of the symbols, and any state of the processor, the function z will specify the configuration and state that would result (i.e.. it will specify the use of sx), and also the relationship to the world that the whole signifies. For example,given a LISP expression of the form (+ z (PROG (SETQ A 2) A)), ~g would specify that the whole expression designated the number three, that it would return the numeral "3", and that the machine would be left in a state in which the binding of the variable A was changed to the numeral "z". A modest result; what is important is merely a) that both declarative import and procedural significance must be reconstructed in order to tell .a full story about LISP; and b) that they must be formulated together.Rather than pursue this view in detail, it is helpful to set out several points that emerge from analyses developed within this framework:a. In most programming languages, o can be specified compositionally and independently of 4, or * --this amounts to a formal statement of Fodor's modularity thc~m for language, z7 In the ease of formal systems, O is often context free and compositional, but not always (reader macros can render it opaque, or at least intensional, and some languages such as ALGOL ale apparently context-sensitive). It is noteworthy, however, that there have been computational languages for which e could not be specified indepently of * a fact that is often stated as the fact that the programming language "cannot be parsed except at runtime" (TEC0 and the first versions of SHALLTALK had this character).b. Since LISP is computational, it follows that a full account of its * can be specified independent of 4,; this is in essence the formality condition. It is important to bring out, however, that a local version of * will typically not be compositional in a modem computational formalism, even though such locality holds in purely extensional context-free side-effect free languages such as the h-calculus.c. It is widely agreed that * does not uniquely determine ,I, (this is the "psychology narrowly construed" and the concomitant methodological solipsism of Putnam and Fodor and othemlS). 
However this fact is compatible with our foundational claim that computational systems are distinguished in virtue of having some version of 4, as part of their characterisation. A very similar point can be made for logic: although any given logic can (presumably) be given a mathematically-specified model theory, that theory doesn't typically tie down what is often called the standard model or interpretation --the interpretation that we use. This fact does not release us, however, from positing as a candidate logic only a formalism that humans can interpret.d. The declarative interpretation 4, cannot be wholly determined independent of *, except in purely declarative languages (such as the x-calculus and logic and so forth). This is to say that without some account of the effect on the processor of one fragment of a whole linguistic structure, it may be impossible to say what that processor will take another fragment as designating. The use of StTQ in LISP is an example; natural language instances will be explored, below.This last point needs a word of explanation. It is of course possible to specify 4, in mathematical terms without any explicit mention of a • -like function; the approach we use in LISP defines both. and in terms of the overarching function • mentioned above, and we could of course simply define 4, without defining . at all. Our i~oint, rather, is that any successful definition of ~, will effectively have to do the work of *, more or less explicidy, either by defining some identifiable relationship, or else by embedding that relationship within the recta-theoretic machinery. We are arguing, in other words, only that the subject we intend * to cover must be treated in some fashion or other.What is perhaps surprising about aII of this machinery is that it must be brought to bear on a purely procedural language --all three relationships (O, 4,, and .) figure crucially in an account even of LISP. we are not suggesting that LzsP is like natural languages: to point out just one crucial difference, there is no way in LISP or in any other programming language (except PROLOG) tO say anything, whereas the ability to say things is clearly a foundational aspect of any human language. The problem in the procedural languages is one of what we may call assertional force; although it is possible to construct a sentence-like expression with a clear declarative semantics (such as some equivalent of "x • 3"), one cannot use it in such a way as to actually mean it --so as to have it carry any assertional weight. For example, it is trivial to set some variable x to a, or to ask whether x is 3, but there is no way to state that x is 3, It should be admitted, however, that computational languages bearing assertional force are under considerable current investigation. This general interest is probably one of the reasons for PaOLOG'S emergent popularity; other computational systems with an explicit declarative character include for example specification languages, data base models, constraint languages, and knowledge representation languages in A.I.We can only assume that the appropriate semantics for all of these formalisms will align even more closely with an illuminating semantics for natural language.What does all of this have to do with natural language, and with computational linguistics? 
The essential point is this: tf this characterisation of formal systems is tenable, and if the techniques of standard programming language semantics can be fit into this mould, then it may be possible to combine those approaches with the techniques of programming language semantics and of logic and model theories, to construct complex and interacting accounts of * and of 4,. To take just one example, the techniques that are used to construct mathematical accounts of environments and continuations might be brought to bear on the issue of dealing with the complex circumstances involving discourse models, theories of focus in dealing with anaphora, and so on; both cases involve an attempt to construct a recursively specifiable account of non-local interactions among disparate fragments of a composite text.But the contributions can proceed in the other direction as well: even from a very simple application of this framework to this circumstance of LISP, for example, we have been able to show how an accepted computational notion fails to cohere with our attributed linguistically based understanding, involving us in a major reconstruction of LZSP'S foundations. The similarities are striking.Our claim, in sum, is that similar phenomena occur in programming languages and natural languages, and that each discipline could benefit from the semantical techniques developed in the other. Some examples of these similar phenomena will help to motivate this view. The first is the issue ~ t,,. appropriate use of noun phrases: as well as employing a noun phrase in a standard e .~,lnmnal position, natural language semantics has concerned itself with more difficult cases such as intensional contexts (as in the underlined designator in I didn't know The Big Apple was an island. where the co-designating term New York cannot be substituted without changing the meaning), the so-called attributive~referential distinction of Donellan z9 (the difference, roughly, between using a noun phrase like "the man with a martini" to inform you that someone is drinking a martini, as Opposed to a situation where one uses the heater's belief or assumption that someone is drinking a martini to refer to him), and so on. Another example different from either of these is provided by the underlined term in For the next 20 years let's re~trict the president's salary to $20,000, on the reading in which after Reagan is defeated he is allowed to earn as much as he pleases, but his successor comes under our constraint. The analagous computational cases include for example the use of an expression like (the formal analog of) make the sixth array element be 10 (i.e., A(B) ::~ 10). where we mean not that the current sixth element should be 10 (the current sixth array element might at the moment tie 9, and 9 can't be 10), but rather that we would like the description "the sixth array element" to refer to 10 ~so-called "Lvalues", analogous to HACI.ISP'S serf construct). Or, to take a ,:lifferent case, suppose we say set x to the sixth array element (i.e., x :: = A(B)), where we mean not that x should be set to the current sixth array element, but that it should always be equal to that element (stated computationaUy this might be phrased as saying that :~ should track a(6); stated linguistically we might say that X should mean "the sixth array element"). 
Although this is not a standard type of assignment, the new constraint languages provide exactly such facilities, and macros (classic computational intensional operators) can be used in more traditional languages for such purposes. Or, for a final example, consider the standard declaration INTEGER x, in which the term "x" refers neither to the variable itself (variables are variables, not numbers), nor to its current designation, but rather to whatever will satisfy the description "the value of x" at any point in the course of a computation. All in all, we cannot ignore the attempt on the computationalists' part to provide complex mechanisms so strikingly similar to the complex ways we use noun phrases in English.

A very different sort of linguistic phenomenon that occurs in both programming languages and in natural language is what we might call "premature exits": cases where the processing of a local fragment aborts the standard interpretation of an encompassing discourse. If for example I say to you I was walking down the street that leads to the house that Mary's aunt used to ... forget it; I was taking a walk, then the "forget it" must be used to discard the analysis of some amount of the previous sentence. The grammatical structure of the subsequent phrase determines how much has been discarded, of course; the sentence would still be comprehensible if the phrase "an old house I like" followed the "forget it". We are not accustomed to semantical theories that deal with phenomena like this, of course, but it is clear that any serious attempt to model real language understanding will have to face them. Our present point is merely that continuations 20 enable computational formalisms to deal exactly with the computational analogs of this: so-called escape operators like MACLISP's THROW and CATCH and QUIT.

In addition, a full semantics of language will want to deal with such sentences as If by "flustrated" you mean what I think, then she was certainly flustrated. The proper treatment of the first clause in this sentence will presumably involve lots of "Ψ" sorts of considerations: its contribution to the remainder of the sentence has more to do with the mental states of speaker and hearer than with the world being described by the presumed conversation. Once again, the overarching computational hypothesis suggests that the way these psychological effects must be modelled is in terms of alterations in the state of an internal process running over a field of computational structures.

As well as these specific examples, a couple of more general morals can be drawn, important in that they speak directly to styles of practice that we see in the literature. The first concerns the suggestion, apparently of some currency, that we reject the notion of logical form, and "do semantics directly" in a computational model. On our account this is a mistake, pure and simple: to buy into the computational framework is to believe that the ingredients in any computational process are inherently linguistic, in need of interpretation.
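Returning briefly to the premature exits mentioned above, the computational analog of the "forget it" can be exhibited with Common Lisp's CATCH and THROW (a minimal sketch; the MACLISP operators cited in the text behave similarly, and the tag and value names are ours):

    ;; The pending computation (+ 1 ... 2) -- the "rest of the sentence" --
    ;; is discarded when the inner fragment throws; control escapes to the
    ;; CATCH, which returns the thrown value instead.
    (catch 'forget-it
      (+ 1 (throw 'forget-it 'fresh-start) 2))    ; => FRESH-START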
Thus these computational ingredients, too, will need semantics; the internalisation of English into a computer (Θ) is a translation relationship (in the sense of preserving Φ, presumably) -- even if it is wildly contextual, and even if the internal language is very different in structure from the structure of English.

It has sometimes been informally suggested, in an analogous vein, that Montague semantics cannot be taken seriously computationally, because the models that Montague proposes are "too big" -- how could you possibly carry these infinite functions around in your head, we are asked to wonder. But of course this argument commits a use/mention mistake: the only valid computational reading of Montague would mean that mentalese (S) would consist of designators of the functions Montague proposes, and those designators can of course be a few short formulae. It is another consequence of our view that any semanticist who proposes some kind of "mental structure" in his or her account of language is committed to providing an interpretation of that structure. Consider for example a proposal that posits a notion of "focus" for a discourse fragment. Such a focus might be viewed as a (possibly abstract) entity in the world, or as an element of computational structure playing such-and-such role in the behavioural model of language understanding. It might seem that these are alternative accounts: what our view insists is that an interpretation of the latter must give it a designation (Φ); thus there would be a computational structure (being biased, we will call it the focus-designator), and a designation (that we call the focus-itself). The complete account of focus would have to specify both of these (either directly, or else by relying on the generic declarative semantics to mediate between them), and also tell a story about how the focus-designator plays a causal role (Ψ) in engendering the proper behaviour in the computational model of language understanding.

There is one final problem to be considered: what it is to design an internal formalism S (the task, we may presume, of anyone designing a knowledge representation language). Since, on our view, we must have a semantics, we have the option either of having the semantics informally described (or, even worse, tacitly assumed), or else we can present an explicit account, either by defining such a story ourselves or by borrowing from someone else. If the LISP case can be taken as suggestive, a purely declarative model theory will be inadequate to handle the sorts of computational interactions that programming languages have required (and there is no a priori reason to assume that successful computational models for natural language will be found that are simpler than the programming languages the community has found necessary for the modest sorts of tasks computers are presently able to perform). However it is also reasonable to expect that no direct analog to programming language semantics will suffice, since they have to date been so concerned with purely procedural (behavioural) consequence. It seems at least reasonable to suppose that a general interpretation function, of the Σ sort mentioned earlier, may be required.

Consider for example the KLONE language presented by Brachman et al. 21 Although no semantics for KLONE has been presented, either procedural or declarative, its proponents have worked both in investigating the Θ-semantics (how to translate English into KLONE), and in developing an informal account of the procedural aspects.
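The use/mention point about "carrying infinite functions around" can be put in a line or two of Common Lisp (an illustrative sketch of ours; the names and the particular designators are invented):

    ;; One need not house the infinite function itself: a short designator
    ;; suffices.  The finite structure below designates the (infinite)
    ;; doubling function.
    (defvar *doubling-designator* '(lambda (n) (* 2 n)))

    ;; Likewise, a focus-designator is a structure in the machine; the
    ;; focus-itself -- what it designates -- need not be in the machine at all.
    (defvar *focus-designator* '(the capital of France))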
Curiously, recent directions in the KLONE project would suggest that its authors expect to be able to provide a "declarative-only" account of KLONE semantics (i.e., expect to be able to present an account of Φ independent of Ψ), in spite of our foregoing remarks. Our only comment is to remark that independence of procedural consequence is not a pre-requisite to an adequate semantics; the two can be recursively specifiable together; thus this apparent position is stronger than formally necessary -- which makes it perhaps of considerable interest.

In sum, we claim that any semantical account of either natural language or computational language must specify Θ, Φ, and Ψ; if any are left out, the account is not complete. We deny, furthermore, that there is any fundamental distinction to be drawn between so-called procedural languages (of which LISP is the paradigmatic example in A.I.) and other more declarative languages (encodings of logic, or representation languages). We deny as well, contrary to at least some popular belief, the view that a mathematically well-specified semantics for a candidate "mentalese" must be satisfied by giving an independently specified declarative semantics (as would be possible for an encoding of logic, for example). The designers of KRL, 22 for example, for principled reasons denied the possibility of giving a semantics independent of the procedures in which the KRL structures participated; our simple account of LISP has at least suggested that such an approach could be pursued on a mathematically sound footing. Note however, in spite of our endorsement of what might be called a procedural semantics, that this in no way frees one from giving a declarative semantics as well; procedural semantics and declarative semantics are two pieces of a total story; they are not alternatives.

* I am grateful to Barbara Grosz and Hector Levesque for their comments on an earlier draft of this short paper, and to Jane Robinson for her original suggestion that it be written.
:
We have argued elsewhere 1 that the distinguishing mark of those objects and processes we call computational has to do with attributed semantics: we humans find computational processes coherent exactly because we attach semantical significance to their behaviour, ingredients, and so forth. Put another way, computers, on our view, are those devices that we understand by deploying our linguistic faculties. For example, the reason that a calculator is a computer, but a car is not, is that we take the ingredients of the calculator to be symbolic (standing, in this particular case, for numbers and functions and so forth), and understand the interactions and organisation of the calculator in terms of that interpretation (this part divides, this part represents the sum, and so on). Even though by and large we are able to produce an explanation of the behaviour that does not rest on external semantic attribution (this is the formality condition mentioned by Fodor, Haugeland, and others 2), we nonetheless speak, when we use computational terms, in terms of this semantics. These semantical concepts rest at the foundations of the discipline: the particular organisations that computers have -- their computational raison d'etre -- emerge not only from their mechanical structure but also from their semantic interpretability. Similarly, the terms of art employed in computer science -- program, compiler, implementation, interpreter, and so forth -- will ultimately be definable only with reference to this attributed semantics; they will not, on our view, ever be found reducible to non-semantical predicates. 3

This is a ramifying and problematic position, which we cannot defend here. 4 We may simply note, however, the overwhelming evidence in favour of a semantical approach manifested by everyday computational language. Even the simple view of computer science as the study of symbol manipulation 5 reveals this bias. Equally telling is the fact that programming languages are called languages. In addition, language-derived concepts like name and reference and semantics permeate computational jargon (to say nothing of interpreter, value, variable, memory, expression, identifier and so on) -- a fact that would be hard to explain if semantics were not crucially involved. It is not just that in discussing computation we use language; rather, in discussing computation we use words that suggest that we are also talking about linguistic phenomena.

The question we will focus on in this paper, very briefly, is this: if computational artefacts are fundamentally linguistic, and if, therefore, it is appropriate to analyse them in terms of formal theories of semantics (it is apparent that this is a widely held view), then what is the proper relationship between the so-called computational semantics that results, and more standard linguistic semantics (the discipline that studies people and their natural languages: how we mean, and what we are talking about, and all of that good stuff)? And furthermore, what is it to use computational models to explain natural language semantics, if the computational models are themselves in need of semantical analysis? On the face of it, there would seem to be a certain complexity that should be sorted out.

In answering these questions we will argue approximately as follows: in the limit computational semantics and linguistic semantics will coincide, at least in underlying conception, if not in surface detail (for example some issues, like ambiguity, may arise in one case and not in the other).
Unfortunately, however, as presently used in computer science the term "semantics" is given such an operational cast that it distracts attention from the human attribution of significance to computational structures. 6 In contrast, the most successful models of natural language semantics, embodied for example in standard model theories and even in Montague's program, have concentrated almost exclusively on referential or denotational aspects of declarative sentences.

Judging only by surface use, in other words, computational semantics and linguistic semantics appear almost orthogonal in concern, even though they are of course similar in style (for example they both use meta-theoretic mathematical techniques -- functional composition, and so forth -- to recursively specify the semantics of complex expressions from a given set of primitive atoms and formation rules). It is striking, however, to observe two facts. First, computational semantics is being pushed (by people and by need) more and more towards declarative or referential issues. Second, natural language semantics, particularly in computationally-based studies, is focusing more and more on pragmatic questions of use and psychological import. Since computational linguistics operates under the computational hypothesis of mind, psychological issues are assumed to be modelled by a field of computational structures and the state of a processor running over them; thus these linguistic concerns with "use" connect naturally with the "operational" flavour of standard programming language semantics. It seems not implausible, therefore -- we betray our caution with the double negative -- that a unifying framework might be developed.

It will be the intent of this paper to present a specific, if preliminary, proposal for such a framework. First, however, some introductory comments. In a general sense of the term, semantics can be taken as the study of the relationship between entities or phenomena in a syntactic domain S and corresponding entities in a semantic domain D, as pictured in the following diagram.

We call the function mapping elements from the first domain into elements of the second an interpretation function (to be sharply distinguished 7 from what in computer science is called an interpreter, which is a different beast altogether). Note that the question of whether an element is syntactic or semantic is a function of the point of view; the syntactic domain for one interpretation function can readily be the semantic domain of another (and a semantic domain may of course include its own syntactic domain).

Not all relationships, of course, count as semantical; the "grandmother" relationship fits into the picture just sketched, but stakes no claim on being semantical. Though it has often been discussed what constraints on such a relationship characterise genuinely semantical ones (compositionality or recursive specifiability, and a certain kind of formal character to the syntactic domain, are among those typically mentioned), we will not pursue such questions here.
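To make the interpretation-function/interpreter contrast concrete, here is a toy example in Common Lisp (our illustration, with invented data, not part of the original text):

    ;; A toy interpretation function: a mapping from a syntactic domain
    ;; (Roman numerals) to a semantic domain (numbers).  It is a relation,
    ;; not a process that runs.
    (defvar *interpretation* '(("I" . 1) ("II" . 2) ("III" . 3)))
    (cdr (assoc "III" *interpretation* :test #'string=))   ; => 3

    ;; An interpreter, by contrast, is a program that executes expressions:
    (eval '(+ 1 2))                                         ; => 3

Which constraints would make such a relationship genuinely semantical is, as just said, a question we leave aside.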
Rather, we will complicate our diagram as follows, so as to enable us to characterise a rather large class of computational and linguistic formalisms:

[Figure omitted in this copy: notations N1 and N2 are internalised (Θ) into internal structures S1 and S2; S1 and S2 designate (Φ) entities D1 and D2 in the semantic domain; Ψ relates S1 to S2.]

If the diagram were applied to first-order logic, s1 and s2 would be something like abstract derivation tree types of first-order formulae; if the diagram were applied to the human mind, under the hypothesis of a formally encoded mentalese, s1 and s2 would be tokens of internal mentalese, and Θ would be the function computed by the "linguistic" faculty (on a view such as that of Fodor 8). In adopting these terms we mean to be speaking very generally; thus we mean to avoid, for example, any claim that tokens of English are internalised (a term we will use for Θ) into recognisable tokens of mentalese. In particular, the proper account of Θ for humans could well simply describe how the field of mentalese structures, in some configuration, is transformed into some other configuration, upon being presented with a particular English sentence; this would still count, on our view, as a theory of Θ.

In contrast, Φ is the interpretation function that makes explicit the standard denotational significance of linguistic terms, relating, we may presume, expressions in S to the world of discourse. The relationship between my mental token for T. S. Eliot, for example, and the poet himself, would be formulated as part of Φ. Again, we speak very broadly; Φ is intended to manifest what, paradigmatically, expressions are about, however that might best be formulated (Φ includes for example the interpretation functions of standard model theories). Ψ, in contrast, relates some internal structures or states to others -- one can imagine it specifically as the formally computed derivability relationship in a logic, as the function computed by the primitive language processor in a computational machine (i.e., as LISP's EVAL), or more generally as the function that relates one configuration of a field of symbols to another, in terms of the modifications engendered by some internal processor computing over those states. (Φ and Ψ are named, for mnemonic convenience, by analogy with philosophy and psychology, since a study of Φ is a study of the relationship between expressions and the world -- since philosophy takes you "out of your mind", so to speak -- whereas a study of Ψ is a study of the internal relationships between symbols, all of which, in contrast, are "within the head" of the person or machine.)

Some simple comments. First, N1, N2, S1, S2, D1, and D2 need not all necessarily be distinct: in a case where s1 is a self-referential designator, for example, D1 would be the same as s1; similarly, in a case where Ψ computed a function that was designation-preserving, then D1 and D2 would be identical. Secondly, we need not take a stand on which of Φ and Ψ has a prior claim to being the semantics of s1. In standard logic, Ψ (i.e., derivability: ⊢) is a relationship, but is far from a function, and there is little tendency to think of it as semantical; a study of Ψ is called proof theory. In computational systems, on the other hand, Ψ is typically much more constrained, and is also, by and large, analysed mathematically in terms of functions and so forth, in a manner much more like standard model theories. Although in this author's view it seems a little far-fetched to call the internal relationships (the "use" of a symbol) semantical, it is nonetheless true that we are interested in characterising both, and it is unnecessary to express a preference.
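The three relationships can be summarised by their signatures (a gloss of ours on the diagram, using the domains N, S, and D introduced above):

    Θ : N → S      internalisation of external notation into internal structures
    Φ : S → D      declarative interpretation: what an internal structure designates
    Ψ : S → S      procedural consequence: what the processor makes of the structure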
For discussion, we will refer to the Φ-semantics of a symbol or expression as its declarative import, and refer to its Ψ-semantics as its procedural consequence.

We have heard it said in other quarters that "procedural" and "declarative" theories of semantics are contenders; 9 to the extent that we have been able to make sense of these notions, it appears that we need both.

It is possible to use this diagram to characterise a variety of standard formal systems. In the standard models of the λ-calculus, for example, the designation function Φ takes λ-expressions onto functions; the procedural regimen Ψ, usually consisting of α- and β-reductions, can be shown to be Φ-preserving. Similarly, if in a standard predicate logic we take Φ to be (the inverse of the) satisfaction relationship, with each element of S being a sentence or set of sentences, and elements of D being those possible worlds in which those sentences are true, and similarly take Ψ as the derivability relationship, then soundness and completeness can be expressed as the equation Ψ(s1,s2) ≡ [D1 ⊆ D2]. As for all formal systems (these presumably subsume the computational ones), it is crucial that Ψ be specifiable independent of Φ. The λ-calculus and predicate logic systems, furthermore, have no notion of a processor with state; thus the appropriate Ψ involves what we may call local procedural consequence, relating a simple symbol or set of symbols to another set. In a more complex computational circumstance, as we will see below, it is appropriate to characterise a more complex full procedural consequence involving not only simple expressions, but fuller encodings of the state of various aspects of the computational machine (for example, at least environments and continuations in the typical computational case 10).

An important consequence of the analysis illustrated in the last figure is that it enables one to ask a question not typically asked in computer science, about the (Φ-) semantic character of the function computed by Ψ. Note that questions about soundness and completeness in logic are exactly questions of this type. In separate research, 11 we have shown, by subjecting it to this kind of analysis, that computational formalisms can be usefully analysed in these terms as well. In particular, we demonstrated that the universally accepted LISP evaluation protocol is semantically confused, in the following sense: sometimes it preserves Φ (i.e., Φ(Ψ(s)) = Φ(s)), and sometimes it embodies Φ (i.e., Ψ(s) = Φ(s)). The traditional LISP notion of evaluation, in other words, conflates simplification and reference relationships, to its peril (in that report we propose some LISP dialects in which these two are kept strictly separate). The current moral, however, is merely that our approach allows the question of the semantical import of Ψ to be asked.

As well as considering LISP, we may use our diagram to characterise the various linguistically oriented projects carried on under the banner of "semantics". Model theories and formal theories of language (we include Tarski and Montague in one sweep) have concentrated primarily on Φ.
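The preserves/embodies distinction is easy to exhibit (a minimal sketch in Common Lisp; the diagnosis is the text's, but the particular calls are our own choice of illustration):

    ;; Sometimes evaluation preserves designation: (+ 2 3) and the numeral 5
    ;; co-designate the same number, so Φ(Ψ(s)) = Φ(s).
    (eval '(+ 2 3))       ; => 5

    ;; And sometimes it embodies designation: (quote a) designates the symbol A,
    ;; and evaluation returns that very symbol, so Ψ(s) = Φ(s).
    (eval '(quote a))     ; => A

To continue with the linguistically oriented projects: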
Natural language semantics in some quarters 12 focuses on Θ -- on the translation into an internal medium -- although the question of what aspects of a given sentence must be preserved in such a translation is of course of concern (no translator could ignore the salient properties, semantical and otherwise, of the target language, be it mentalese or predicate logic, since the endeavour would otherwise be without constraint). Lewis (for one) has argued that the project of articulating Θ -- an endeavour he calls markerese semantics -- cannot really be called semantics at all, 13 since it is essentially a translation relationship, although it is worth noting that Θ in computational formalisms is not always trivial, and a case can at least be made that many superficial aspects of natural language use, such as the resolution of indexicals, may be resolved at this stage (if for example you say I am warm then I may internalise your use of the first person pronoun into my internal name for you).

Those artificial intelligence researchers working in knowledge representation, perhaps without too much distortion, can be divided into two groups: a) those whose primary semantical allegiance is to Φ, and who (perhaps as a consequence) typically use an encoding of first-order logic as their representation language, and b) those who concern themselves primarily with Ψ, and who therefore (legitimately enough) reject logic as even suggestive (Ψ in logic -- derivability -- is a relatively unconstrained relationship, for one thing; secondly, the relationship between the entailment relationship, to which derivability is a hopeful approximation, and the proper "Ψ" of rational belief revision, is at least a matter of debate 14).

Programming language semantics, for reasons that can at least be explored, if not wholly explained, has focused primarily on Ψ, although in ways that tend to confuse it with Φ. Except for PROLOG, which borrows its Φ straight from a subset of first-order logic, and the LISPs mentioned earlier, 15 we have never seen a semantical account of a programming language that gave independent accounts of Φ and Ψ. There are complexities, furthermore, in knowing just what the proper treatment of general languages should be. In a separate paper 16 we argue that the notion program is inherently defined as a set of expressions whose (Φ-) semantic domain includes data structures (and set-theoretic entities built up over them). In other words, in a computational process that deals with finance, say, the general data structures will likely designate individuals and money and relationships among them, but the terms in that part of the process called a program will not designate these people and their money, but will instead designate the data structures that designate people and money (plus of course relationships and functions over those data structures). Even on a declarative view like ours, in other words, the appropriate semantic domain for programs is built up over data structures -- a situation strikingly like the standard semantical accounts that take abstract records or locations or whatever as elements of the otherwise mathematical domain for programming language semantics.
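The two levels of designation can be sketched in a few lines of Common Lisp (an illustration of ours; the account names and figures are invented):

    ;; A data structure that we take to designate people and their money:
    (defvar *accounts* '((john . 100) (mary . 250)))

    ;; A term in the program designates the data structure -- the numeral
    ;; stored under JOHN -- not the money itself; the numeral, in turn,
    ;; designates the sum of money.
    (cdr (assoc 'john *accounts*))    ; => 100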
It may be this fact -- that all base terms in programs are meta-syntactic -- that has spawned the confusion between operations and reference in the computational setting.

Although the details of a general story remain to be worked out, the LISP case mentioned earlier is instructive, by way of suggestion as to how a more complete computational theory of language semantics might go. In particular, because of the context relativity and non-local effects that can emerge from processing a LISP expression, Φ is not specifiable in a strictly compositional way. Ψ -- when taken to include the broadest possible notion that maps entire configurations of the field of symbols and of the processor itself onto other configurations and states -- is of course recursively specifiable (the same fact, in essence, as saying that LISP is a deterministic formal calculus). A pure characterisation of Ψ without a concomitant account of Φ, however, is unmotivated -- as empty as a specification of a derivability relationship would be for a calculus for which no semantics had been given. Of more interest is the ability to specify what we call a general significance function Σ, that recursively specifies Φ and Ψ together (this is what we were able to do for LISP). In particular, given any expression s1, any configuration of the rest of the symbols, and any state of the processor, the function Σ will specify the configuration and state that would result (i.e., it will specify the use of s1), and also the relationship to the world that the whole signifies. For example, given a LISP expression of the form (+ 1 (PROG (SETQ A 2) A)), Σ would specify that the whole expression designated the number three, that it would return the numeral "3", and that the machine would be left in a state in which the binding of the variable A was changed to the numeral "2". A modest result; what is important is merely a) that both declarative import and procedural significance must be reconstructed in order to tell a full story about LISP; and b) that they must be formulated together.

Rather than pursue this view in detail, it is helpful to set out several points that emerge from analyses developed within this framework:

a. In most programming languages, Θ can be specified compositionally and independently of Φ or Ψ -- this amounts to a formal statement of Fodor's modularity theorem for language. 17 In the case of formal systems, Θ is often context-free and compositional, but not always (reader macros can render it opaque, or at least intensional, and some languages such as ALGOL are apparently context-sensitive). It is noteworthy, however, that there have been computational languages for which Θ could not be specified independently of Ψ -- a fact that is often stated as the fact that the programming language "cannot be parsed except at runtime" (TECO and the first versions of SMALLTALK had this character).

b. Since LISP is computational, it follows that a full account of its Ψ can be specified independent of Φ; this is in essence the formality condition. It is important to bring out, however, that a local version of Ψ will typically not be compositional in a modern computational formalism, even though such locality holds in purely extensional, context-free, side-effect free languages such as the λ-calculus.

c. It is widely agreed that Ψ does not uniquely determine Φ (this is the "psychology narrowly construed" and the concomitant methodological solipsism of Putnam and Fodor and others 18).
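The example above can be run directly in Common Lisp (a sketch: we use PROGN where the text's MACLISP-style PROG appears, and a global variable A of our own):

    (defvar a 0)
    (+ 1 (progn (setq a 2) a))    ; => 3
    ;; Σ, applied to this expression and the prior state, would deliver all three facts:
    ;;   declarative import (Φ):  the expression designates the number three
    ;;   the result returned:     the numeral 3
    ;;   the resulting state:     A is now bound to the numeral 2
    a                             ; => 2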
Appendix:
| null | null | null | null | {
"paperhash": [
"woods|procedural_semantics_as_a_theory_of_meaning.",
"israel|what's_wrong_with_non-monotonic_logic?",
"newell|physical_symbol_systems",
"fodor|methodological_solipsism_considered_as_a_research_strategy_in_cognitive_psychology",
"steele|the_art_of_the_interpreter_or,_the_modularity_complex_(parts_zero,_one,_and_two)",
"bobrow|an_overview_of_krl,_a_knowledge_representation_language",
"maloney|general_semantics.",
"findler|associative_networks-_representation_and_use_of_knowledge_by_computers",
"robins|an_integrated_theory_of_linguistic_descriptions"
],
"title": [
"Procedural Semantics as a Theory of Meaning.",
"What's Wrong with Non-Monotonic Logic?",
"Physical Symbol Systems",
"Methodological solipsism considered as a research strategy in cognitive psychology",
"The Art of the Interpreter or, The Modularity Complex (Parts Zero, One, and Two)",
"An overview of KRL, a Knowledge Representation Language",
"General Semantics.",
"Associative Networks- Representation and Use of Knowledge by Computers",
"An integrated theory of linguistic descriptions"
],
"abstract": [
"Abstract : This report addresses fundamental issues of semantics for computational systems. The question at issue is 'What is it that machines can have that would correspond to the knowledge of meanings that people have and that we seem to refer to by the ordinary language term 'meaning'?' The proposed answer is that the notion of truth-conditions can be explicated and made precise by identifying them with a particular kind of abstract procedure and that such procedures can serve as the meaning bearing elements of a theory of semantics suitable for computer implementation. This theory, referred to as 'procedural semantics', has been the basis of several successful computerized systems and is acquiring increasing interest among philosophers of language. (Author)",
"In this paper, I ask, and attempt to answer, the following question: What's Wrong with Non-Monotonic Logic? The answer, briefly, is that the motivation behind the wonderfully impressive work involved in its development is based on a confusion of proof-theoretic with epistemological issues.",
"On the occasion of a first conference on Cognitive Science, it seems appropriate to review the basis of common understanding between the various disciplines. In my estimate, the most fundamental contribution so far of artificial intelligence and computer science to the joint enterprise of cognitive science has been the notion of a physical symbol system, i.e., the concept of a broad class of systems capable of having and manipulating symbols, yet realizable in the physical universe. The notion of symbol so defined is internal to this concept, so it becomes a hypothesis that this notion of symbols includes the symbols that we humans use every day of our lives. In this paper we attempt systematically, but plainly, to lay out the nature of physical symbol systems. Such a review is in ways familiar, but not thereby useless. Restatement of fundamentals is an important exercise.The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, or the U.S. Government.Herb Simon would be a co-author of this paper, except that he is giving his own paper at this conference. The key ideas are entirely joint, as the references indicate.",
"Abstract The paper explores the distinction between two doctrines, both of which inform theory construction in much of modern cognitive psychology: the representational theory of mind and the computational theory of mind. According to the former, propositional attitudes are to be construed as relations that organisms bear to mental representations. According to the latter, mental processes have access only to formal (nonsemantic) properties of the mental representations over which they are defined. The following claims are defended: (1) That the traditional dispute between “rational” and “naturalistic” psychology is plausibly viewed as an argument about the status of the computational theory of mind. Rational psychologists accept a formality condition on the specification of mental processes; naturalists do not. (2) That to accept the formality condition is to endorse a version of methodological solipsism. (3) That the acceptance of some such condition is warranted, at least for that part of psychology which concerns itself with theories of the mental causation of behavior. This is because: (4) such theories require nontransparent taxonomies of mental states; and (5) nontransparent taxonomies individuate mental states without reference to their semantic properties. Equivalently, (6) nontransparent taxonomies respect the way that the organism represents the object of its propositional attitudes to itself, and it is this representation which functions in the causation of behavior. The final section of the paper considers the prospect for a naturalistic psychology: one which defines its generalizations over relations between mental representations and their environmental causes, thus seeking to account for the semantic properties of propositional attitudes. Two related arguments are proposed, both leading to the conclusion that no such research strategy is likely to prove fruitful.",
"We examine the effects of various language design decisions on the programming styles available to a user of the language, with particular emphasis on the ability to incrementally construct modular systems. At each step we exhibit an interactive meta-circular interpreter for the language under consideration. Each new interpreter is the result of an incremental change to a previous interpreter. The interpreters we exhibit are all written in a simple dialect of LISP, and all implement LISP-like languages. A subset of these interpreters constitute a partial historical reconstruction of the actual evolution of LISP.",
"This paper describes KRL, a Knowledge Representation Language designed for use in understander systems. It outlines both the general concepts which underlie our research and the details of KRL-0, an experimental implementation of some of these concepts. KRL is an attempt to integrate procedural knowledge with a broad base of declarative forms. These forms provide a variety of ways to express the logical structure of the knowledge, in order to give flexibility in associating procedures (for memory and reasoning) with specific pieces of knowledge, and to control the relative accessibility of different facts and descriptions. The formalism for declarative knowledge is based on structured conceptual objects with associated descriptions. These objects form a network of memory units with several different sorts of linkages, each having well-specified implications for the retrieval process. Procedures can be associated directly with the internal structure of a conceptual object. This procedural attachment allows the steps for a particular operation to be determined by characteristics of the specific entities involved. The control structure of KRL is based on the belief that the next generation of intelligent programs will integrate data-directed and goal-directed processing by using multi-processing. It provides for a priority-ordered multi-process agenda with explicit (user-provided) strategies for scheduling and resource allocation. It provides procedure directories which operate along with process frameworks to allow procedural parameterization of the fundamental system processes for building, comparing, and retrieving memory structures. Future development of KRL will include integrating procedure definition with the descriptive formalism.",
"ing (selecting) processes were discussed, and certain conclusions generally agreed, e .g . : To an animal his objective world 'is his all' he does not know that he selects he behaves as if Event and Object were identical for him this evaluation is adequate for survival his make-up is suited to this . Man can be aware that he selects human behaviour is such that E & 0 are not treated as identical man uses symbols to represent E & 0, etc . Consequences likely to result from identifying and confusing orders were illustrated by numerous examples . Projection and to-me-ness also came into this meeting's discussions . As a brief 'digression' it was discussed among members that human hunger for the static (something that does not change in a changing world) appears natural to man, and may be considered as a basic human urge ; and that many 'religions' might well be considered as expressions of this urge and attempts to satisfy a natural human hunger . It was pointed out that man appears to be endowed with the necessary mechanism or means to achieve satisfaction, and, provided man fulfills his human destiny, he will find satisfaction for this hunger but not if he 'walks the pre-scientific path through confusion to perdition .' For homework a sheet of paper was handed to each member with the following written thereon : At our next meeting we shall consider differences between sense and nonsense . Can you recognize nonsense when you come across it? Please deal with these 'questions' in writing, as briefly as you can and NOTE CAREFULLY HOW LONG YOU SPEND ON EACH . 1. What is the secret of success? 2. Can religion conquer communism? 3. Will Christianity survive? 4 . If the temperature of some water is 60 degrees Fahr . what is the temperature of the atoms of hydrogen and oxygen which go to make up that water? 5 . If the 3.24 from Victoria was two minutes late on Friday what is the point duty policeman's name? 6 . Should the death sentence be abolished? 7 . How long is a piece of string? 8. Can you think of any circumstances in which this recorded extract from a conversation might make sense? 'Then a screw nicks my snout and puts me in peter .' IV . It was felt as gratifying to the group leader to find that most of the members had treated nearly all the 'questions' as nonsense . Some had given a 'Roland for an Oliver' by making blatant nonsense remarks ; others had written remarks such as, 'I cannot answer the question until I know what you mean by . . .' A few, however, treated 'questions' 1,2,3, 6, as sense . With two exceptions the members labelled 8 as 'possibly underworld, spiv, or prison slang' . The leader interpreted the conversation extract thus 'Then a prison warder confiscated my tobacco and locked me in my cell .' Numbers 18 & 19, 1955-56 The relaxing effect of answered questions was contrasted with the continued states of tension arising from unanswered and unanswerable questions . An apple was passed around for the members to handle, smell, look at, feel. The apple was then placed in the centre of a diagram, adapted from O .R . Bontrager's diagram on page 5, G S BULLETIN 4 & 5 . The apple was labelled Apple, . The process of abstracting in higher orders was illustrated . For homework the members were asked to devise a diagram in a similar manner STARTING WITH AN ACTUAL OBJECT to place in the centre . It was intimated that at the next meeting we would continue our consideration of nonsense . This meeting was rounded off with the following reading from Shakespeare : (Macbeth, Act III, Sc . 
1) First Murderer : We are men, my liege . Macbeth : Aye, in the catalogue ye go for men; as hounds and greyhounds, mongrels, spaniels, curs, shoughs, water-rugs, and demiwolves, are slept all by the name of dogs : the valu'd file distinguishes the swift, the slow, the subtle, the housekeeper, the hunter, everyone according to the gift which bounteous nature hath in him clos'd ; whereby he does receive particular addition, from the bill that writes them all alike ; and so of men . V . At the fifth meeting, some diagrams were produced 'that appeared so excellent to the leader that they will be used at other courses in the place of his own crude drawings . With a general show of enthusiasm, we passed on to the main topic for this meeting, 'more nonsense' . The Plogglies story was read from Wendell Johnson's People in quandaries, and a brief resume given of the Brownian movement, also based on Wendell Johnson, pages 76, 71-72 . After this, a home-made model of the apparatus illustrated and described on page 73, G S BULLETIN 12 & 13, was demonstrated . _ 7",
"Upon opening this book and leafing through the pages, one gets the impression of an important compendium. The fourteen articles provide good coverage of semantic networks and related systems for representing knowledge. Their average length of 33 pages is long enough to give each author reasonable scope, yet short enough to permit a variety of viewpoints to be expressed in a single volume. The editor should be commended for his efforts in putting together a wellorganized book instead of just another collection of unrelated papers.",
"The authors offer a theory concerning the nature of a linguistic description, that is, a theoretical statement about the kind of description that a linguist is able to give of a natural language. This theory seeks to integrate the generative conception of phonology and syntax developed by Chomsky and Halle, with the conception of semantics proposed by Katz and Fodor. The authors demonstrate that the integration within one theory of these conceptions of phonology, syntax, and semantics clarifies, further systematizes, and justifies each of them. They also show that such integration sheds considerable light upon the nature of linguistic universals, that is, upon the nature of language. Primary focus is placed on the relation between the syntactic and the semantic components of a linguistic description."
],
"authors": [
{
"name": [
"W. Woods"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"David J. Israel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Newell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Fodor"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Steele",
"G. Sussman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Bobrow",
"T. Winograd"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Martin Maloney",
"E. A. Lanier",
"Robert K. Straus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"N. Findler"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Robins",
"J. Katz",
"P. Postal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"62650298",
"46224589",
"4642944",
"144790399",
"60658969",
"7965074",
"6383153",
"15616277",
"144828363"
],
"intents": [
[],
[],
[],
[],
[],
[],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | null | 512 | 0.017578 | null | null | null | null | null | null | null | null |
4118540e02e8613e32f3f748688a8b98d6c3431d | 1360818 | null | The Representation of Inconsistent Information in a Dynamic Model-Theoretic Semantics | Model-theoretic semantics provides a computationally attractive means of representing the semantics of natural language. However, the models used in this formalism are static and are usually infinite. Dynamic models are incomplete models that include only the information needed for an application and to which information can be added. Dynamic models are basically approximations of larger conventional models, but differ in several interesting ways. The difference discussed here is the possibility of inconsistent information being included in the model. If a computation causes the model to expand, the result of that computation may be different than the result of performing that same computation with respect to the newly expanded model (i.e. the result is inconsistent with the information currently in the dynamic model). Mechanisms are introduced to eliminate these local (temporary) inconsistencies, but the most natural mechanism can introduce permanent inconsistencies in the information contained in the dynamic model. These inconsistencies are similar to those that people have in their knowledge and beliefs. The mechanism presented is shown to be related to both the intensional isomorphism and impossible worlds approaches to this problem. | {
"name": [
"Moran, Douglas B."
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 10 | 1 | null | The difference discussed here is the possibility of inconsistent information being included in the model. If a computation causes the model to expand, the result of that computation may be different than the result of performing that same computation with respect to the newly expanded model (i.e. the result is inconsistent with the information currently in the dynamic model). Mechanisms are introduced to eliminate these local (temporary) inconsistencies, but the most natural mechanism can introduce permanent inconsistencies in the information contained in the dynamic model. These inconsistencies are similar to those that people have in their knowledge and beliefs. The mechanism presented is shown to be related to both the intensional isomorphism and impossible worlds approaches to thi~ problem.In model-theoretic semantics, the semantics of a sentence is represented with a logical formula, and its meaning is the result of evaluating that formula with respect to a logical model. The model-theoretic semantics used here is that given inThe proper treatment of quantification in ordinar~ English (PTQ) [Montague 1973 ], but the problems and results discussed here apply to similar systems and theories.From the viewpoint of natural language understanding, the conventional ~oO~l-theoretic semantics used in descriptive theories has two basic problems: (I) the information contained in a mod~ is complete and unchanging whereas the information possessed by a person listening to an utterance is incomplete and may be changed by the understanding of that utterance, and (2) the models are usually presumed to be infinite, whereas a person possesses only finite information. Dynamic model-theoretic semantics Moran 1978, 1979; Moran 1980 ] addresses these problems by allowing the models to contain incomplete information and to have information added to the model. A dynamic model is a "good enough" approximation to an infinite model when it contains the finite subset of information that is needed to determine the meanings of the sentences actually presented to the system. Dynamic model-theoretic semantics allows the evaluation of a formula to cause the addition of information to the model. This interaction of the evaluation of a formula and the expansion of the model produces several linguistically interesting side-effects, and these have been labelled model-theoretic pra~matics [Moran 19~0 ].One of these effects occurs when the information given by an element of the model is expanded between the time when that element is identified as the denotation of a sub-expression in the formula and the time when it is used in combination with other elements. If the expansion of the model is not properly managed, the result of the evaluation of such a formula can be wrong (i.e. inconsistent with the contents of the model). Two mechanisms for maintaining the correctness of the denotational relationship are presented.In the first, the management of the relationship is external to the model. This mechanism has the disadvantage that it involves high overhead -the denotational relationships must be repeatedly verified, and unnecessary expansions of the model may be performed. 
The second mechanism is similar to the first, but eliminates much of this overhead: it incorporates the management of the denotational relationship into the model by augmenting the model's structure. It is this second mechanism that is of primary interest. It was added to the system to eliminate a source of immediate errors, but it was found to introduce long-term "errors". These errors are interesting because they are the kinds of errors that people frequently make. The structure added to the model permits it to contain inconsistent pieces of information (the structure of a conventional model prevents this), and the mechanism provides a motivated means for controlling which inconsistencies may and may not be entered into the dynamic model. An important subclass of the inconsistencies provided by this mechanism is known as intensional substitution failure, and this mechanism can be viewed as a variant of both the "impossible" worlds [e.g. Cresswell 1973: 39-41] and the intensional isomorphism [e.g. Lewis 1972] approaches. Since intensionality alone does not provide an account for intensional substitution failure, this mechanism provides an improved account of propositional attitudes. Finding the argument to which the λ-expression is applied before evaluating the λ-expression is not a viable solution for two reasons. First, some λ-expressions are not applied to arguments, but they have the same problem with their denotations changing as the model expands. Second, having to find the argument to which a λ-expression is applied eliminates one of the system's major advantages, compositionality. Dynamic models contain incomplete information, and the sets, relations, and functions in these models can be incompletely specified (their domains are usually incomplete). In PTQ, some phrases translate to λ-expressions; other λ-expressions are used to combine and reorder subexpressions. The possible denotations of these λ-expressions are the higher-order elements of the model (sets, relations, and functions). For example, the proper name "John" translates to the logical expression (omitting intensionality for the time being): (1) [λP P(j)], where P ranges over properties of individuals; the expression has as its denotation the set of properties that John has. The sentence "John talks" translates to: (2) [λP P(j)](talk). This formula evaluates to true or false depending on whether or not the property that is the denotation of "talk" is in the set of properties that John has. The dynamic model that is used to evaluate (2) may not contain the element that is the denotation of "talk". If so, a problem ensues. If the formula is evaluated left-to-right, the set of properties denoted by the λ-expression is identified, followed by the evaluation of "talk". This forces the model to expand to contain the property of talking. The addition of this new property expands the domain of the set of properties denoted by "John", thus forcing the expansion of the characteristic function of that set to specify whether or not talking is to be included. However, because the relationship between the λ-expression for "John" and the set of properties denoted is maintained only during the evaluation of the λ-expression (there is no link from the denotation back to the expression that it denotes), there are no restrictions on how the set is to be expanded.
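The following is a minimal sketch, not taken from the paper, of how evaluating a formula like (2) against an incomplete dynamic model can force the model to expand after a denotation has already been fixed. The class and method names (DynamicModel, ensure_property, and so on) are hypothetical.

```python
class DynamicModel:
    """A deliberately incomplete model: properties are added only when needed."""
    def __init__(self):
        self.properties = {}  # property name -> set of individuals it holds of

    def ensure_property(self, prop):
        # Expanding the model: the new property's extension is undetermined here,
        # so it is initialized empty and could in principle be filled arbitrarily.
        if prop not in self.properties:
            self.properties[prop] = set()
        return prop

    def denotation_of_name(self, individual):
        # [λP P(j)] denotes the set of properties the individual has,
        # relative to whatever properties the model currently contains.
        return {p for p, ext in self.properties.items() if individual in ext}


def evaluate_john_talks(model):
    # Left-to-right evaluation of [λP P(j)](talk):
    # 1. fix the denotation of "John" against the *current* model,
    props_of_john = model.denotation_of_name("john")
    # 2. evaluating "talk" may expand the model after that denotation was fixed,
    talk = model.ensure_property("talk")
    # 3. so the answer can disagree with the model as it stands after expansion.
    return talk in props_of_john


m = DynamicModel()
print(evaluate_john_talks(m))  # False: "talk" was added after the denotation of "John" was fixed
```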
Thus, it is possible to define the property of talking to have John talking and to expand the set previously identified as being denoted by "John" to not include talking, or vice versa. If such an expansion were made, the inconsistency would exist only in the evaluation of that particular formula, and not in the model. Subsequent evaluations of the sentence would recompute the denotation of "John" and get the correct set of properties. This is not a problem with the direction of evaluation: the argument to which the λ-expression is applied may occur to the left of that λ-expression, for example: (3) [λR R(talk)](λP P(j)) (note: (3) is equivalent to (2) above). The mechanism that evaluates a formula with respect to a model has been augmented with a table that contains each λ-expression and the image of its denotation in the current stage of the dynamic model. When the domain of the λ-expression expands, the correct denotational relationship is maintained by expanding the image in the table using the λ-expression, and then finding the corresponding element in the model. If the element in the model that was the denotation of the λ-expression was not expanded in the same way as the image in this table, a new element corresponding to the expanded image is added to the model. This table allows two λ-expressions that initially have the same denotation to have different denotations after the model expands. Since the expansion of elements in the model is undirected, an element that was initially the denotation of a λ-expression may expand into an unused element. The accumulation of unused elements and the repeated comparisons of images in the table to elements in the model frequently imposes a high overhead. The second mechanism for maintaining the correctness of the denotations of λ-expressions basically involves incorporating the table from the first mechanism into the model. In effect, the λ-expressions become meaningful names for the elements that they denote. These meaningful names are then used to restrict the expansion of the named elements; once an element has been identified as the denotation of a λ-expression, it remains its denotation. (Meaningful names are also useful for other purposes, such as generating sentences from the information in the model and for providing procedural, rather than declarative, representations for the information in the model [Moran 1980].) In the first mechanism, when the domain of two λ-expressions does not contain any of the elements that distinguish them, they will have the same denotation, and when such a distinguishing element is added to the model, the denotations of the two λ-expressions will become different. With meaningful names, this is not possible because the denotational relationship between a λ-expression and its denotation in the model is permanent. Since the system cannot anticipate how the model will be expanded, if it is possible to add to the domain of two λ-expressions an element that would distinguish their denotations, those expressions must be treated as having distinct denotations. Thus, all and only the logically-equivalent expressions should be identified as having the same denotation. If two equivalent expressions were not so identified, their denotations would be different elements in the model and this would allow them to be treated differently.
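Below is a small illustrative sketch of the first mechanism described above: a table kept outside the model that records each λ-expression together with the image of its denotation and re-expands that image whenever the domain grows. Representing λ-expressions as strings and the helper names are assumptions made for this example only.

```python
class DenotationTable:
    """External bookkeeping: λ-expression -> image of its denotation."""
    def __init__(self):
        self.images = {}  # expression (a string here) -> frozenset of domain elements

    def register(self, expression, image):
        self.images[expression] = frozenset(image)

    def expand_domain(self, expression, new_element, holds):
        """When the domain grows, re-expand the image using the expression itself.

        `holds` is a callback deciding whether the new element belongs in the
        image; it stands in for re-evaluating the λ-expression.
        """
        image = set(self.images[expression])
        if holds(expression, new_element):
            image.add(new_element)
        self.images[expression] = frozenset(image)

    def find_or_add_model_element(self, expression, model_elements):
        """Compare the stored image against the model; add a fresh element if needed."""
        image = self.images[expression]
        for elem in model_elements:
            if elem == image:
                return elem
        model_elements.append(image)  # the previously used element may now sit unused
        return image


# Two expressions can start with the same image and drift apart as the domain grows.
table = DenotationTable()
table.register("[λP P(john)]", {"sail"})
table.register("[λP P(mary)]", {"sail"})
table.expand_domain("[λP P(john)]", "talk", holds=lambda e, x: True)
table.expand_domain("[λP P(mary)]", "talk", holds=lambda e, x: False)
print(table.images["[λP P(john)]"] == table.images["[λP P(mary)]"])  # False
```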
For example, if "John and Mary" was not identified to be the same as "Mary and John", it would be possible to have the model contain the inconsistent information that "John and Mary talk" is true and that "Mary and John talk" is false. If two non-equivalent ~-expressions were identified as being equivalent, they would have the same element as their denotation. When an element that would distinguish the denotations of these two expressions was added to the model, the expansion of the element that was serving as both their denotations would be incorrect for one of them and thus introduce an inconsistency. This need to correctly identl~y equivalent expressions presents a problem because even within the subset of expressions that are the translations of English phrases in the PTQ fragment, equivalence is undecldable [Warren 1979] . It is this undecidability that is the basis of the introduction of inconsistencies into the model. To be useful in a natural language understanding system, this mechanism needs to have timely determinations of whether or not two expressions are equivalent, and thus it will use techniques (including heuristics) that will produce false answers for some pairs of expressions. It is the collection of techniques that is used that determines which inconsistencies will and will not be admitted into the model.*Intensional substitution failure occurs when one has different beliefs about intensionallyequivalent propositions. For example, all theorems are intenslonally-equlvalent (each is true in all possible worlds), but it is possible to believe one proposition that is a theorem and not believe another.The techniques used by the second mechanism to identify logically-equivalent formulas can be viewed as similar to Carnap's Intensional isomorphism approach in that it is based on finding equivalences between the constituents and the structures of the expressions being compared. This mechanism can also be viewed as using an * While the fragment of English used in PTQ is large enough to demonstrate the introduction of inconsistent information, it is viewed as not being large enough to permit interesting claims about what are useful techniques for testing equivalences.Consequently, this part of the mechanism has not been implemented."impossible" worlds approach: if two intensionally-equivalent formulas are not identified as being equivalent, the mechanism "thinks" that it is possible to expand their domain to include a distinguishing element. Since the formulas are equivalent in all possible worlds, the expected distinguishing element must be an "impossible" world.The presence of intensional substitution failure is one of the important tests of a theory of propositional attitudes. This mechanism is a correlate of that of Thomason [1980] , with the addition of meaningful names to intensional objects serving the same purpose as Thomason's additional layer of types. | null | null | null | null | Main paper:
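A sketch of the second mechanism under the assumptions above: λ-expressions act as meaningful names, and two expressions share a model element only if an (inevitably incomplete) equivalence test says they are logically equivalent. The normalization used here, sorting the conjuncts of an "and", is only one example of the kind of heuristic the text allows; it is not claimed to be the technique used in the actual system.

```python
def normalize(expression):
    """Heuristic canonical form: treat 'X and Y' as equivalent to 'Y and X'.

    Any such test is incomplete, since equivalence over the PTQ fragment is
    undecidable; pairs it fails to identify get distinct names, which is what
    lets the model store inconsistent information about them.
    """
    parts = [p.strip() for p in expression.split(" and ")]
    return " and ".join(sorted(parts))


class NamedModel:
    def __init__(self):
        self.elements = {}  # meaningful name (normal form) -> element (a dict of facts)

    def element_for(self, expression):
        return self.elements.setdefault(normalize(expression), {})

    def assert_fact(self, expression, predicate, value):
        self.element_for(expression)[predicate] = value


m = NamedModel()
m.assert_fact("John and Mary", "talk", True)
print(m.element_for("Mary and John")["talk"])  # True: recognized as the same meaningful name

# A pair the heuristic misses stays distinct, so contradictory facts can coexist,
# an analogue of intensional substitution failure:
m.assert_fact("two plus two", "equals_four", True)
m.assert_fact("four", "equals_four", False)
print(m.element_for("two plus two"), m.element_for("four"))
```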
:
The difference discussed here is the possibility of inconsistent information being included in the model. If a computation causes the model to expand, the result of that computation may be different than the result of performing that same computation with respect to the newly expanded model (i.e. the result is inconsistent with the information currently in the dynamic model). Mechanisms are introduced to eliminate these local (temporary) inconsistencies, but the most natural mechanism can introduce permanent inconsistencies in the information contained in the dynamic model. These inconsistencies are similar to those that people have in their knowledge and beliefs. The mechanism presented is shown to be related to both the intensional isomorphism and impossible worlds approaches to thi~ problem.In model-theoretic semantics, the semantics of a sentence is represented with a logical formula, and its meaning is the result of evaluating that formula with respect to a logical model. The model-theoretic semantics used here is that given inThe proper treatment of quantification in ordinar~ English (PTQ) [Montague 1973 ], but the problems and results discussed here apply to similar systems and theories.From the viewpoint of natural language understanding, the conventional ~oO~l-theoretic semantics used in descriptive theories has two basic problems: (I) the information contained in a mod~ is complete and unchanging whereas the information possessed by a person listening to an utterance is incomplete and may be changed by the understanding of that utterance, and (2) the models are usually presumed to be infinite, whereas a person possesses only finite information. Dynamic model-theoretic semantics Moran 1978, 1979; Moran 1980 ] addresses these problems by allowing the models to contain incomplete information and to have information added to the model. A dynamic model is a "good enough" approximation to an infinite model when it contains the finite subset of information that is needed to determine the meanings of the sentences actually presented to the system. Dynamic model-theoretic semantics allows the evaluation of a formula to cause the addition of information to the model. This interaction of the evaluation of a formula and the expansion of the model produces several linguistically interesting side-effects, and these have been labelled model-theoretic pra~matics [Moran 19~0 ].One of these effects occurs when the information given by an element of the model is expanded between the time when that element is identified as the denotation of a sub-expression in the formula and the time when it is used in combination with other elements. If the expansion of the model is not properly managed, the result of the evaluation of such a formula can be wrong (i.e. inconsistent with the contents of the model). Two mechanisms for maintaining the correctness of the denotational relationship are presented.In the first, the management of the relationship is external to the model. This mechanism has the disadvantage that it involves high overhead -the denotational relationships must be repeatedly verified, and unnecessary expansions of the model may be performed. 
The second mechanism is similar to the first, but eliminates much of this overhead: it incorporates the management of the denotational relationship into the model by augmenting the model's structure.It is this second mechanism that is of primary interest.It was added to the system to eliminate a source of immediate errors, but it was found to introduce long-term "errors".These errors are interesting because they are the kinds of errors that people frequently make. The structure added to the model permits it to contain inconsistent pieces of information (the structure of a conventional model prevents this), and the mechanism provides a motivated means for controllin~ which inconsistencies may and may not be entered into the dynamic model. An important subclass of the inconsistencies provided by this mechanism are known as intensional substitution failure and this mechanism can be viewed as a variant of both the "impossible" worlds [e.g. Cresswell 1973: 39-41] and the intenslonal isomorphism [e.g. Lewis 1972] approaches. Since intensionality alone does not provide an account for Intensional substitution failure, this mechanism provides an improved account of propositional attitudes.Finding the argument to which the ~-expression is applied before evaluating the ~-expression is not a viable solution for two reasons.First, some h-expressions are not applied to arguments, but they have the same problem with their denotations changing as the model expands.Second, having to find the argument to which a h-expression is applied eliminates one of the system's major advantages, compositionality.Dynamic models contain incomplete information, and the sets, relations, and functions in these models can be incompletely specified (their domains are usually incomplete).In PTQ, some phrases translate to ~-expressions; other ~-expressions are used to combine and reorder subexpressions.The possible denotations of these ~-expressions are the higher-order elements of the model (sets, relations, and functions).For example, the proper name "John" translates to the logical expression (omitting intensionality for the time being): (I)[~ P P(j)] where P ranges over properties of individuals and has as its denotation the set of properties that John has. The sentence "John talks" translates to:(2)[~ P P(j)](talk) This formula evaluates to true or false depending on whether or not the property that is the denotation of "talk" is in the set of properties that John has.The dynamic model that is used to evaluate (2) may not contain the element that is the denotation of "talk".If so, a problem ensues. If the formula is evaluated left-to-right, the set of properties denoted by the ~ -expression is identified, followed by the evaluation of "talk". This forces the model to expand to contain the property of talking.The addition of this new property expands the domain of the set of properties denoted by "John", thus forcing the expansion of the characteristic function of that set to specify whether or not talking is to be included.However, because the relationship between the Z-expression for "John" and the set of properties denoted is maintained only during the evaluation of the ~-expression (there is no link from the denotation back to the expression that it denotes), there are no restrictions on how the set is to be expanded. 
Thus, it is possible to define the property of talking to have John talking and to expand the set previously identified as being denoted by "John" to not include talking, or vice versa.If such an expansion were made, the inconsistency would exist only in the evaluation of that particular formula, and not in the model. Subsequent evaluations of the sentence would recompute the denotation of "John" and get the correct set of properties. This is not a problem with the direction of evaluation -the argument to which the ~-expression is applied may occur to the left of that ~-expression, for example: 3[R R R(talk)](AP P(j)) (note: (3) is equivalent to (2) above).The mechanism that evaluates a formula with respect to a model has been augmented with a table that contains each ~-expression and the ima6e of its denotation in the current stage of the dynamic model.When the domain of the ~-expression expands, the correct denotational relationship is maintained by expanding the image in the table using the ~-expression, and then finding the corresponding element in the model.If the element in the model that was the denotation of the h-expression was not expanded in the same way as the image in this table, a new element corresponding to the expanded image is added to the model. This table allows two ~-expressions that initially have the same denotation to have different denotations after the model expands. Since the expansion of elements in the model is undirected, an element that was initially the denotation of a ~-expression may expand into an unused element. The accumulation of unused elements and the repeated comparisions of images in the table to elements in the model frequently imposes a high overhead.The second mechanism for maintaining the correctness of the denotations of ~-expressions basically involves incorporating the table from the first mechanism into the model.In effect, the R-expressions become meanin6ful names for the elements that they denote. These meaningful names are then used to restrict the expansion of the named elements; once an element has been identified as the denotation of a ~-expresslon, it remains its denotation.*In the first mechanism, when the domain of two ~-expressions does not contain any of the elements that distinguish them, they will have the same denotation, and when such a distinguishing element is added to the model, the denotations of the two h-expressions will become different. With meaningful names, this is not possible because the denotational relationship between a h-expression * Meaningful names are also useful for other purposes, such as generating sentences from the information in the model and for providing procedural -rather than declarative representations for the information in the model [Moran 1980 ]. end its denotation in the model is permanent. Since the system cannot anticipate how the model will be expanded, if it is possible to add to the domain of two h-expresslons an element that would distinguish their denotations, those expressions must be treated as having distinct denotations. Thus, all and only the logically-equivalent expressions should be identified as having the same denotation. If two equivalent expressions were not so identified, their denotations would be different elements in the model and this would allow them to be treated differently. 
For example, if "John and Mary" was not identified to be the same as "Mary and John", it would be possible to have the model contain the inconsistent information that "John and Mary talk" is true and that "Mary and John talk" is false. If two non-equivalent λ-expressions were identified as being equivalent, they would have the same element as their denotation. When an element that would distinguish the denotations of these two expressions was added to the model, the expansion of the element that was serving as both their denotations would be incorrect for one of them and thus introduce an inconsistency. This need to correctly identify equivalent expressions presents a problem because even within the subset of expressions that are the translations of English phrases in the PTQ fragment, equivalence is undecidable [Warren 1979]. It is this undecidability that is the basis of the introduction of inconsistencies into the model. To be useful in a natural language understanding system, this mechanism needs to have timely determinations of whether or not two expressions are equivalent, and thus it will use techniques (including heuristics) that will produce false answers for some pairs of expressions. It is the collection of techniques that is used that determines which inconsistencies will and will not be admitted into the model. Intensional substitution failure occurs when one has different beliefs about intensionally-equivalent propositions. For example, all theorems are intensionally-equivalent (each is true in all possible worlds), but it is possible to believe one proposition that is a theorem and not believe another. The techniques used by the second mechanism to identify logically-equivalent formulas can be viewed as similar to Carnap's intensional isomorphism approach in that it is based on finding equivalences between the constituents and the structures of the expressions being compared. This mechanism can also be viewed as using an "impossible" worlds approach: if two intensionally-equivalent formulas are not identified as being equivalent, the mechanism "thinks" that it is possible to expand their domain to include a distinguishing element. Since the formulas are equivalent in all possible worlds, the expected distinguishing element must be an "impossible" world. (While the fragment of English used in PTQ is large enough to demonstrate the introduction of inconsistent information, it is viewed as not being large enough to permit interesting claims about what are useful techniques for testing equivalences. Consequently, this part of the mechanism has not been implemented.) The presence of intensional substitution failure is one of the important tests of a theory of propositional attitudes. This mechanism is a correlate of that of Thomason [1980], with the addition of meaningful names to intensional objects serving the same purpose as Thomason's additional layer of types.
Appendix:
| null | null | null | null | {
"paperhash": [
"maloney|general_semantics."
],
"title": [
"General Semantics."
],
"abstract": [
"ing (selecting) processes were discussed, and certain conclusions generally agreed, e .g . : To an animal his objective world 'is his all' he does not know that he selects he behaves as if Event and Object were identical for him this evaluation is adequate for survival his make-up is suited to this . Man can be aware that he selects human behaviour is such that E & 0 are not treated as identical man uses symbols to represent E & 0, etc . Consequences likely to result from identifying and confusing orders were illustrated by numerous examples . Projection and to-me-ness also came into this meeting's discussions . As a brief 'digression' it was discussed among members that human hunger for the static (something that does not change in a changing world) appears natural to man, and may be considered as a basic human urge ; and that many 'religions' might well be considered as expressions of this urge and attempts to satisfy a natural human hunger . It was pointed out that man appears to be endowed with the necessary mechanism or means to achieve satisfaction, and, provided man fulfills his human destiny, he will find satisfaction for this hunger but not if he 'walks the pre-scientific path through confusion to perdition .' For homework a sheet of paper was handed to each member with the following written thereon : At our next meeting we shall consider differences between sense and nonsense . Can you recognize nonsense when you come across it? Please deal with these 'questions' in writing, as briefly as you can and NOTE CAREFULLY HOW LONG YOU SPEND ON EACH . 1. What is the secret of success? 2. Can religion conquer communism? 3. Will Christianity survive? 4 . If the temperature of some water is 60 degrees Fahr . what is the temperature of the atoms of hydrogen and oxygen which go to make up that water? 5 . If the 3.24 from Victoria was two minutes late on Friday what is the point duty policeman's name? 6 . Should the death sentence be abolished? 7 . How long is a piece of string? 8. Can you think of any circumstances in which this recorded extract from a conversation might make sense? 'Then a screw nicks my snout and puts me in peter .' IV . It was felt as gratifying to the group leader to find that most of the members had treated nearly all the 'questions' as nonsense . Some had given a 'Roland for an Oliver' by making blatant nonsense remarks ; others had written remarks such as, 'I cannot answer the question until I know what you mean by . . .' A few, however, treated 'questions' 1,2,3, 6, as sense . With two exceptions the members labelled 8 as 'possibly underworld, spiv, or prison slang' . The leader interpreted the conversation extract thus 'Then a prison warder confiscated my tobacco and locked me in my cell .' Numbers 18 & 19, 1955-56 The relaxing effect of answered questions was contrasted with the continued states of tension arising from unanswered and unanswerable questions . An apple was passed around for the members to handle, smell, look at, feel. The apple was then placed in the centre of a diagram, adapted from O .R . Bontrager's diagram on page 5, G S BULLETIN 4 & 5 . The apple was labelled Apple, . The process of abstracting in higher orders was illustrated . For homework the members were asked to devise a diagram in a similar manner STARTING WITH AN ACTUAL OBJECT to place in the centre . It was intimated that at the next meeting we would continue our consideration of nonsense . This meeting was rounded off with the following reading from Shakespeare : (Macbeth, Act III, Sc . 
1) First Murderer : We are men, my liege . Macbeth : Aye, in the catalogue ye go for men; as hounds and greyhounds, mongrels, spaniels, curs, shoughs, water-rugs, and demiwolves, are slept all by the name of dogs : the valu'd file distinguishes the swift, the slow, the subtle, the housekeeper, the hunter, everyone according to the gift which bounteous nature hath in him clos'd ; whereby he does receive particular addition, from the bill that writes them all alike ; and so of men . V . At the fifth meeting, some diagrams were produced 'that appeared so excellent to the leader that they will be used at other courses in the place of his own crude drawings . With a general show of enthusiasm, we passed on to the main topic for this meeting, 'more nonsense' . The Plogglies story was read from Wendell Johnson's People in quandaries, and a brief resume given of the Brownian movement, also based on Wendell Johnson, pages 76, 71-72 . After this, a home-made model of the apparatus illustrated and described on page 73, G S BULLETIN 12 & 13, was demonstrated . _ 7"
],
"authors": [
{
"name": [
"Martin Maloney",
"E. A. Lanier",
"Robert K. Straus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null
],
"s2_corpus_id": [
"6383153"
],
"intents": [
[
"background"
]
],
"isInfluential": [
false
]
} | null | 512 | 0.001953 | null | null | null | null | null | null | null | null |
579e456e69b9a77a54a7d0392f9d412568e1cd51 | 17676483 | null | The Text System for Natural Language Generation: An Overview | Computer-based generation of natural language requires consideration of two different types of problems: i) determining the content and textual shape of what is to be said, and 2) transforming that message into English. A computational solution to the problems of deciding what to say and how to organize it effectively is proposed that relies on an interaction between structural and semantic processes. Schemas, which encode aspects of discourse structure, are used to guide the generation process. A focusing mechanism monitors the use of the schemas, providing constraints on what can be said at any point. These mechanisms have been implemented as part of a generation method within the context of a natural language database system, addressing the specific problem of responding to questions about database structure. | {
"name": [
"McKeown, Kathleen R."
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 12 | 30 | null | Deciding what to say and how to organize it effectively are two issues of particular importance to the generation of natural language text.In the past, researchers have concentrated on local issues concerning the syntactic and lexical choices involved in transforming a pre-determined message into natural language. The research described here ~nphasizes a computational Solution to the more global problems of determining the content and textual shape of what is to be said. ~re specifically, my goals have been the development and application of principles of discourse structure, discourse coherency, and relevancy criterion to the computer generation of text. These principles have been realized in the TEXT system, reported on in this paper.The main features of the generation method used in TEXT include I) an ability to select relevant information, 2) a system for pairing rhetorical techniques (such as analogy) with discourse purv~ses (such as defining terms) and 3) a focusing mec~mnism. Rhetorical techniques, which encode aspects of discourse structure, guide the selection of information for inclusion in the text from a relevant knowledge poq~l -a subset of *This work was partially supported by National Science ~Dundation grant #MCS81-07290.the knowledge base which contains information relevant to the discourse purpose.The focusing mechanism helps maintain discourse coherency.It aids in the organization of the message by constraining the selection of information to be talked about next to that which ties in with the previous discourse in an appropriate way. These processes are described in more detail after setting out the framework of the system. | In answer ing a question about database structure, TEXT identifies those rhetorical techniques that could be used for presenting an appropriate answer.On the basis of the input question, semantic processes produce a relevant knowledge pool. A characterization of the information in this pool is then used to select a single partially ordered set of rhetorical techniques from the various possibilities.A formal representation of the answer (called a "message" ) is constructed by selecting propositions from the relevant knowledge pool which match the rhetorical techniques in the given set. The focusing mechanism monitors the matching process; where there are choices for what to say next (i.e.-either alternative techniques are possible or a single tec~mique matches several propositions in the knowledge pool), the focusing mechanism selects that proposition which ties in most closely with the previous discourse. Once the message has been constructed, the system passes the message to a tactical component [BOSSIE 81 ] which uses a functional grammar [KAY 79] to translate the message into English. | Answering questions about the structure of the database requires access to a high-level description of the classes of objects ino the database, their properties, and the relationships between them. The knowledge base used for the TEXT system is a standard database model which draws primarily from representations developed by Chen [CHEN 76] course of an answer.The relevant knowledge pool is constructed by a fairly simple process. For requests for definitions or available information, the area around the questioned object containing the information immediately associated with the entity (e.g. 
its superordinates, sub-types, and attributes) is circumscribed and partitioned from the remaining knowledge base. For questions about the difference between entities, the information included in the relevant knowledge pool depends on how close in the generalization hierarchy the two entities are. For entities that are very similar, detailed attributive information is included. For entities that are very different, only generic class information is included. A combination of this information is included for entities falling between these two extremes. (See [MCKEOWN 82] for further details.)
6.0 RHETORICAL PREDICATES
Rhetorical predicates are the means which a speaker has for describing information. They characterize the different types of predicating acts s/he may use and delineate the structural relation between propositions in a text. Some examples are "analogy" (comparison with a familiar object), "constituency" (description of sub-parts or sub-types), and "attributive" (associating properties with an entity or event). Linguistic discussion of such predicates (e.g. [GRIMES 75], [SHEPHERD 26]) indicates that some combinations are preferable to others. Moreover, Grimes claims that predicates are recursive and can be used to identify the organization of text on any level (i.e., proposition, sentence, paragraph, or longer sequence of text), although he does not show how. I have examined texts and transcripts and have found that not only are certain combinations of rhetorical techniques more likely than others, certain ones are more appropriate in some discourse situations than others. For example, I found that objects were frequently defined by employing some combination of the following means: (1) identifying an item as a member of some generic class, (2) describing an object's function, attributes, and constituency (either physical or class), (3) making analogies to familiar objects, and (4) providing examples. These techniques were rarely used in random order; for instance, it was common to identify an item as a member of some generic class before providing examples. In the TEXT system, these types of standard patterns of discourse structure have been captured in schemas associated with explicit discourse purposes. The schemas loosely identify normal patterns of usage. They are not intended to serve as grammars of text. The schema shown below serves the purpose of providing definitions:
Identification Schema
identification (class & attribute/function)
[analogy/constituency/attributive]*
[particular-illustration/evidence]+
{amplification/analogy/attributive}
{particular-illustration/evidence}
Here, "{ }" indicates optionality, "/" indicates alternatives, "+" indicates that the item may appear 1 to n times, and "*" indicates that the item may appear 0 to n times. The order of the predicates indicates that the normal pattern of definitions is an identifying proposition followed by any number of descriptive predicates. The speaker then provides one or more examples and can optionally close with some additional descriptive information and possibly another example. TEXT's response to the question "What is a ship?" (shown below) was generated using the identification schema. The sentences are numbered to show the correspondence between each sentence and the predicate it corresponds to in the instantiated schema (the numbers do not occur in the actual output). TEXT's response to the question "What do you know about vehicles?" was generated using the constituency schema.
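As a rough illustration of how a schema of this kind might be written down in code, the identification schema just shown can be captured as a list of slots, each carrying its alternative predicates and a repetition marker. The notation below is an assumption made for this sketch, not TEXT's actual implementation.

```python
# Repetition markers: "1" = exactly once, "*" = 0..n times, "+" = 1..n times, "?" = optional.
IDENTIFICATION_SCHEMA = [
    {"alternatives": ["identification"],                          "repeat": "1"},
    {"alternatives": ["analogy", "constituency", "attributive"],  "repeat": "*"},
    {"alternatives": ["particular-illustration", "evidence"],     "repeat": "+"},
    {"alternatives": ["amplification", "analogy", "attributive"], "repeat": "?"},
    {"alternatives": ["particular-illustration", "evidence"],     "repeat": "?"},
]

def describe(schema):
    # Print each stage of the schema in the "alternatives (repetition)" style used above.
    for slot in schema:
        print(f"{'/'.join(slot['alternatives'])}  ({slot['repeat']})")

describe(IDENTIFICATION_SCHEMA)
```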
It is shown below along with the predicates that were instantiated for the answer. 2) There are 2 types of vehicles in the ONR database: aircraft and water-going vehicles. 3) The water-going vehicle has DB attributes that provide information on TRAVEL MEANS and WATER GOING OPERATION. 4) The aircraft has DB attributes that provide information on TRAVEL MEANS, FLIGHT RADIUS, CEILING and ROLE. Other DB attributes of the vehicle include FUEL (FUEL_CAPACITY and FUEL_TYPE) and FLAG. Two other strategies were identified in the texts examined. These are encoded in the attributive schema, which is used to provide detailed information about a particular aspect of an entity, and the compare and contrast schema, which encodes a strategy for contrasting two entities using a description of their similarities and their differences. For more detail on these strategies, see [MCKEOWN 82]. As noted earlier, an examination of texts revealed that different strategies were used in different situations. In TEXT, this association of technique with discourse purpose is achieved by associating the different schemas with different question-types. For example, if the question involves defining a term, a different set of schemas (and therefore rhetorical techniques) is chosen than if the question involves describing the type of information available in the database. The identification schema can be used in response to a request for a definition. The purpose of the attributive schema is to provide detailed information about one particular aspect of any concept and it can therefore be used in response to a request for information. In situations where an object or concept can be described in terms of its sub-parts or sub-classes, the constituency schema is used. It may be selected in response to requests for either definitions or information. The compare and contrast schema is used in response to a question about the difference between objects. A summary of the assignment of schemas to question-types is shown in Figure 2 (Schemas used for TEXT). Once a question has been posed to TEXT, a schema must be selected for the response structure which will then be used to control the decisions involved in deciding what to say when. On the basis of the given question, a set of schemas is selected as possible structures for the response. This set includes those schemas associated with the given question-type (see Figure 2 above). A single schema is selected out of this set on the basis of the information available to answer the question. For example, in response to requests for definitions, the constituency schema is selected when the relevant knowledge pool contains a rich description of the questioned object's sub-classes and less information about the object itself. When this is not the case, the identification schema is used. The test for what kind of information is available is a relatively simple one. If the questioned object occurs at a higher level in the hierarchy than a pre-determined level, the constituency schema is used. Note that the higher an entity occurs in the hierarchy, the less descriptive information is available about the entity itself. More information is available about its sub-parts since fewer common features are associated with entities higher in the hierarchy. This type of semantic and structural interaction means that a different schema may be used for answering the same type of question.
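A minimal sketch of the selection step just described is given below. The question-type table, the hierarchy-level convention (smaller numbers meaning closer to the top), the cutoff value, and the function names are all assumptions made for illustration.

```python
# Question types paired with candidate schemas, following the kind of assignment in Figure 2.
SCHEMAS_BY_QUESTION = {
    "definition":  ["identification", "constituency"],
    "information": ["attributive", "constituency"],
    "difference":  ["compare-and-contrast"],
}

GENERIC_LEVEL_CUTOFF = 2  # assumed: entities this close to the top of the hierarchy
                          # are described mainly through their sub-classes

def select_schema(question_type, hierarchy_level, pool_is_rich_in_subclasses):
    candidates = SCHEMAS_BY_QUESTION[question_type]
    if len(candidates) == 1:
        return candidates[0]
    # Prefer constituency when the questioned object sits high in the hierarchy
    # or the relevant pool says more about its sub-classes than about itself.
    if "constituency" in candidates and (
            hierarchy_level <= GENERIC_LEVEL_CUTOFF or pool_is_rich_in_subclasses):
        return "constituency"
    return candidates[0]

print(select_schema("definition", hierarchy_level=5, pool_is_rich_in_subclasses=False))  # identification
print(select_schema("definition", hierarchy_level=1, pool_is_rich_in_subclasses=True))   # constituency
```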
An earlier example showed that the identification schema was selected by the TEXT system in response to a request for a definition of a ship. In response to a request for a definition of a guided projectile (shown below), the constituency schema is selected since more information is available about the sub-classes of the guided projectile than about the guided projectile itself.Schema selected: Constituency i) identification 2) constituency 3) identification 4) identification 5) evidence 6) evidence 7) attributive I) A guided projectile is a projectile that is self-propelled. 2) There are 2 types of guided projectiles in the ONR database: torpedoes and missiles. 3) The missile has a target location in the air or on the earth's surface.4) The torpedo has an underwater target location.5 Once a schema has been selected, it is filled by matching the predicates it contains against the relevant knowledge pool. The semantics of each predicate define the type of information it can match in the knowledge pool.The semantics defined for TEXT are particular to the database query dumain and would have to be redefined if the schemas were to be used in another type of system (such as a tutorial system, for example).The semantics are not particular, however, to the domain of the database. When transferring the system from one database to another, the predicate semantics would not have to be altered.A proposition is an instantiated predicate; predicate arguments have been filled with values from the knowledge base. An instantiation of the identification predicate is shown below along with its eventual translation. The schema is filled by stepping through it, using the predicate s~nantics to select information which matches the predicate arguments. In places where alternative predicates occur in the schema, all alternatives are matched against the relevant knowledge pool producing a set of propositions. The focus constraints are used to select the most appropriate proposition.The schemas were implemented using a formalism similar to an augmented transition network (ATN). Taking an arc corresponds to the selection of a proposition for the answer. States correspond to filled stages of the schema. The main difference between the TEXT system implementation and a usual ATN, however, is in the control of alternatives.Instead of uncontrolled backtracking, TEXT uses one state lookahead. From a given state, it explores all possible next states and chooses among them using a function that encodes the focus constraints. This use of one state lookahead increases the efficiency of the strategic component since it eliminates unbounded non-determinism. | In order to test generation principles, the TEXT system was developed as part of a natural language interface to a database system, addressing the specific problem of generating answers to questions about database structure. 
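Returning to the schema-filling step described above, the sketch below mimics the ATN-like traversal with one-state lookahead: from the current stage of the schema, all candidate next propositions are generated and a focus-scoring function picks one, so no backtracking is required. The slot format, proposition format, and both callback names are placeholders rather than the system's actual interfaces.

```python
def fill_schema(schema, knowledge_pool, match, focus_score):
    """Instantiate a schema with one-state lookahead instead of backtracking.

    `schema` is a list of slots with "alternatives" and "repeat" keys;
    `match(predicate, pool)` returns candidate propositions for a predicate;
    `focus_score(proposition, message)` encodes the focus constraints.
    """
    message = []
    for slot in schema:
        candidates = []
        for predicate in slot["alternatives"]:
            candidates.extend(match(predicate, knowledge_pool))
        if not candidates:
            if slot["repeat"] in ("*", "?"):
                continue  # optional stage: skip it
            raise ValueError("a required stage of the schema could not be filled")
        # One-state lookahead: score every possible next proposition, take the best.
        best = max(candidates, key=lambda prop: focus_score(prop, message))
        message.append(best)
    return message
```

The message built this way would then be handed on for realization; the focus-scoring callback is where the constraints discussed later would plug in.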
Three classes of questions have been considered: questions about information available in the database, requests for definitions, and questions about the differences between database entities [MCKEOWN 80]. In this context, input questions provide the initial motivation for speaking. While the specific application of answering questions about database structure was used primarily for testing principles about text generation, it is a feature that many users of such systems would like. Several experiments ([MALHOTRA 75], [TENNANT 79]) have shown that users often ask questions to familiarize themselves with the database structure before proceeding to make requests about the database contents. The three classes of questions considered for this system were among those shown to be needed in a natural language database system. Implementation of the TEXT system for natural language generation used a portion of the Office of Naval Research (ONR) database containing information about vehicles and destructive devices. Some examples of questions that can be asked of the system include:
> What is a frigate?
> What do you know about submarines?
> What is the difference between a whisky and a kitty hawk?
The kind of generation of which the system is capable is illustrated by the response it generates to question (A) below. All entities in the ONR database have DB attributes REMARKS. There are 2 types of entities in the ONR database: destructive devices and vehicles. The vehicle has DB attributes that provide information on SPEED-INDICES and TRAVEL-MEANS. The destructive device has DB attributes that provide information on LETHAL-INDICES. TEXT does not itself contain a facility for interpreting a user's questions. Questions must be phrased using a simple functional notation (shown below) which corresponds to the types of questions that can be asked. It is assumed that a component could be built to perform this type of task and that the decisions it must make would not affect the performance of the generation system. where <e>, <e1>, <e2> represent entities in the database. So far, a speaker has been shown to be limited in many ways. For example, s/he is limited by the goal s/he is trying to achieve in the current speech act. TEXT's goal is to answer the user's current question. To achieve that goal, the speaker has limited his/her scope of attention to a set of objects relevant to this goal, as represented by global focus or the relevant knowledge pool. The speaker is also limited by his/her higher-level plan of how to achieve the goal. In TEXT, this plan is the chosen schema. Within these constraints, however, a speaker may still run into the problem of deciding what to say next. A focusing mechanism is used to provide further constraints on what can be said. The focus constraints used in TEXT are immediate, since they use the most recent proposition (corresponding to a sentence in the English answer) to constrain the next utterance.
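The sketch below illustrates how a question in a functional form could be dispatched to build the relevant knowledge pool along the lines described earlier. Only the definition form is attested in the text (the "(definition AIRCRAFT-CARRIER)" example appearing later); the information and difference forms, the knowledge-base shape, and the function name are assumed analogues.

```python
def relevant_pool(question, kb):
    """Circumscribe the part of the knowledge base relevant to a question.

    `question` is a tuple such as ("definition", "SHIP"); `kb` maps an entity
    to a dict of its immediately associated information (superordinate,
    sub-types, attributes).
    """
    kind = question[0]
    if kind in ("definition", "information"):
        entity = question[1]
        return {entity: kb[entity]}  # the area immediately around the questioned object
    if kind == "difference":
        e1, e2 = question[1], question[2]
        if kb[e1]["superordinate"] == kb[e2]["superordinate"]:
            detail = "attributive"  # very similar entities: detailed attributive information
        else:
            detail = "generic"      # very different entities: generic class information only
        return {e1: kb[e1], e2: kb[e2], "detail": detail}
    raise ValueError(f"unknown question type: {kind}")
```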
Thus, as the text is constructed, it is used to constrain what can be said next. Sidner [SIDNER 79] used three pieces of information for tracking immediate focus: the immediate focus of a sentence (represented by the current focus - CF), the elements of a sentence that are potential candidates for a change in focus (represented by a potential focus list - PFL), and past immediate foci (represented by a focus stack). She showed that a speaker has the following options from one sentence to the next: 1) to continue focusing on the same thing, 2) to focus on one of the items introduced in the last sentence, 3) to return to a previous topic, in which case the focus stack is popped, or 4) to focus on an item implicitly related to any of these three options. Sidner's work on focusing concerned the interpretation of anaphora. She says nothing about which of these four options is preferred over others since in interpretation the choice has already been made. For generation, however, a speaker may have to choose between these options at any point, given all that s/he wants to say. The speaker may be faced with the following choices: 1) continuing to talk about the same thing (current-focus equals current-focus of the previous sentence) or starting to talk about something introduced in the last sentence (current-focus is a member of potential-focus-list of the previous sentence) and 2) continuing to talk about the same thing (current focus remains the same) or returning to a topic of previous discussion (current focus is a member of the focus-stack). When faced with the choice of remaining on the same topic or switching to one just introduced, I claim a speaker's preference is to switch. If the speaker has something to say about an item just introduced and does not present it next, s/he must go to the trouble of re-introducing it later on. If s/he does present information about the new item first, however, s/he can easily continue where s/he left off by following Sidner's legal option #3. Thus, for reasons of efficiency, the speaker should shift focus to talk about an item just introduced when s/he has something to say about it. When faced with the choice of continuing to talk about the same thing or returning to a previous topic of conversation, I claim a speaker's preference is to remain on the same topic. Having at some point shifted focus to the current focus, the speaker has opened a topic for conversation. By shifting back to the earlier focus, the speaker closes this new topic, implying that s/he has nothing more to say about it when in fact, s/he does. Therefore, the speaker should maintain the current focus when possible in order to avoid false implication of a finished topic. These two guidelines for changing and maintaining focus during the process of generating language provide an ordering on the three basic legal focus moves that Sidner specifies:
1. change focus to a member of the previous potential focus list if possible - CF (new sentence) is a member of PFL (last sentence)
2. maintain focus if possible - CF (new sentence) = CF (last sentence)
3. return to a topic of previous discussion - CF (new sentence) is a member of the focus-stack
I have not investigated the problem of incorporating focus moves to items implicitly associated with either current foci, potential focus list members, or previous foci into this scheme. This remains a topic for future research. Even these guidelines, however, do not appear to be enough to ensure a connected discourse.
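A compact sketch of how the ordering above, together with the tie-breaking by links to the previous potential focus list that the discussion goes on to describe, might be applied when choosing the next proposition. The proposition shape (a dict with a focused argument and a set of arguments) and the function name are assumptions for this example.

```python
def select_proposition(options, prev_cf, prev_pfl, focus_stack):
    """Apply the legal focus moves in the stated preference order, then break
    ties by the number of links a proposition has to the previous PFL."""
    def links_to_pfl(prop):
        return len(prop["arguments"] & set(prev_pfl))

    for admissible in (
        [p for p in options if p["focus"] in prev_pfl],     # 1. shift to a member of the last PFL
        [p for p in options if p["focus"] == prev_cf],      # 2. maintain the current focus
        [p for p in options if p["focus"] in focus_stack],  # 3. return to an earlier focus
    ):
        if admissible:
            return max(admissible, key=links_to_pfl)
    return max(options, key=links_to_pfl) if options else None


candidates = [
    {"focus": "balloon",       "arguments": {"balloon", "size"}},
    {"focus": "silver circle", "arguments": {"silver circle", "heat"}},
]
print(select_proposition(candidates, prev_cf="balloon",
                         prev_pfl=["silver circle", "men"], focus_stack=[])["focus"])
# -> "silver circle": the item just introduced is preferred over staying on "balloon"
```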
Although a speaker may decide to focus on a specific entity, s/he may want to convey information about several properties of that entity. S/he will describe related properties of the entity before describing other properties. Thus, strands of semantic connectivity will occur at more than one level of the discourse. An example of this phenomenon is given in dialogues (A) and (B) below. In both, the discourse is focusing on a single entity (the balloon), but in (A) properties that must be talked about are presented randomly. In (B), a related set of properties (color) is discussed before the next set (size). (B), as a result, is more connected than (A). (A) The balloon was red and white striped. Because this balloon was designed to carry men, it had to be large. It had a silver circle at the top to reflect heat. In fact, it was larger than any balloon John had ever seen. (B) The balloon was red and white striped. It had a silver circle at the top to reflect heat. Because this balloon was designed to carry men, it had to be large. In fact, it was larger than any balloon John had ever seen. In the generation process, this phenomenon is accounted for by further constraining the choice of what to talk about next to the proposition with the greatest number of links to the potential focus list. TEXT uses the legal focus moves identified by Sidner by only matching schema predicates against propositions which have an argument that can be focused in satisfaction of the legal options. Thus, the matching process itself is constrained by the focus mechanism. The focus preferences developed for generation are used to select between remaining options. These options occur in TEXT when a predicate matches more than one piece of information in the relevant knowledge pool or when more than one alternative in a schema can be satisfied. In such cases, the focus guidelines are used to select the most appropriate proposition. When options exist, all propositions are selected which have as focused argument a member of the previous PFL. If none exist, then all propositions are selected whose focused argument is the previous current-focus. If none exist, then all propositions are selected whose focused argument is a member of the focus-stack. If these filtering steps do not narrow down the possibilities to a single proposition, that proposition with the greatest number of links to the previous PFL is selected for the answer. The focus and potential focus list of each proposition is maintained and passed to the tactical component for use in selecting syntactic constructions and pronominalization. Interaction of the focus constraints with the schemas means that although the same schema may be selected for different answers, it can be instantiated in different ways. Recall that the identification schema was selected in response to the question "What is a ship?" and the four predicates, identification, evidence, attributive, and particular-illustration, were instantiated. The identification schema was also selected in response to the question "What is an aircraft carrier?", but different predicates were instantiated as a result of the focus constraints:
(definition AIRCRAFT-CARRIER)
Schema selected: identification
1) identification 2) analogy 3) particular-illustration 4) amplification 5) evidence
1) An aircraft carrier is a surface ship with a DISPLACEMENT between 78000 and 80800 and a LENGTH between 1039 and 1063. 2) Aircraft carriers have a greater LENGTH than all other ships and a greater DISPLACEMENT than most other ships.
3) Mine warfare ships, for example, have a DISPLACF24ENT of 320 and a LENGTH of 144. 4) All aircraft carriers in the ONR database have REMARKS of 0, FUEL TYPE of BNKR, FLAG of BLBL, BEAM of 252, ENDU--I~NCE RANGE of 4000, ECONOMIC SPEED of 12, ENDURANCE SPEED of 30 and PRO~LSION of STMTURGRD. 5)--A ship is classified as an aircraft carrier if the characters 1 through 2 of its HULL NO are CV.Several possibilities for further development of the research described here include i) the use of the same strategies for responding to questions about attributes, events, and relations as well as to questions about entities, 2) investigation of strategies needed for responding to questions about the system processes (e.g. How is manufacturer ' s cost determined?) or system capabilities (e.g.Can you handle ellipsis?) , 3) responding to presuppositional failure as well as to direct questions, and 4) the incorporation of a user model in the generation process (currently TEXT assumes a static casual, naive user and gears its responses to this characterization). Tnis last feature could be used, among other ways, in determining the amount of detail required (see [ MCKEOWN 82 ] for discussion of the recursive use of the sch~nas). | The TEXT system successfully incorporates principles of relevancy criteria, discourse structure, and focus constraints into a method for generating English text of paragraph length. Previous work on focus of attention has been extended for the task of generation to provide constraints on what to say next. Knowledge about discourse structure has been encoded into schemas that are used to guide the generation process.The use of these two interacting mechanisms constitutes a departure from earlier generation systems. The approach taken in this research is that the generation process should not simply trace the knowledge representation to produce text.Instead, communicative strategies people are familiar with are used to effectively convey information. This means that the same information may be described in different ways on different occasions.The result is a system which constructs and orders a message in response to a given question. Although the system was designed to generate answers to questions about database structure (a feature lacking in most natural language database systems), the same techniques and principles could be used in other application areas (for example, computer assisted instruction systems, expert systems, etc.) where generation of language is needed. ~owl~~ I would like to thank Aravind Joshi, Bonnie Webber, Kathleen McCoy, and Eric Mays for their invaluable comments on the style and content of this paper.Thanks also goes to Kathleen Mccoy and Steven Bossie for their roles in implementing portions of the sys~om.[MALHOTRA 75]. Malhotra, A. "Design criteria for a knowledge-based English language system for management: an experimental analysis." MAC TR-146, MIT, Cambridge, Mass. (1975) .[ [TENNANT 79]. Tennant, H., "Experience with the evaluation of natural language question answerers." Working paper #18, Univ. of Illinois, Urbana-Champaign, Ill. (1979) . | Main paper:
introduction:
Deciding what to say and how to organize it effectively are two issues of particular importance to the generation of natural language text. In the past, researchers have concentrated on local issues concerning the syntactic and lexical choices involved in transforming a pre-determined message into natural language. The research described here emphasizes a computational solution to the more global problems of determining the content and textual shape of what is to be said. More specifically, my goals have been the development and application of principles of discourse structure, discourse coherency, and relevancy criterion to the computer generation of text. These principles have been realized in the TEXT system, reported on in this paper. The main features of the generation method used in TEXT include 1) an ability to select relevant information, 2) a system for pairing rhetorical techniques (such as analogy) with discourse purposes (such as defining terms), and 3) a focusing mechanism. Rhetorical techniques, which encode aspects of discourse structure, guide the selection of information for inclusion in the text from a relevant knowledge pool - a subset of the knowledge base which contains information relevant to the discourse purpose. (This work was partially supported by National Science Foundation grant #MCS81-07290.) The focusing mechanism helps maintain discourse coherency. It aids in the organization of the message by constraining the selection of information to be talked about next to that which ties in with the previous discourse in an appropriate way. These processes are described in more detail after setting out the framework of the system.
application:
In order to test generation principles, the TEXT system was developed as part of a natural language interface to a database system, addressing the specific problem of generating answers to questions about database structure. Three classes of questions have been considered: questions about information available in the database, requests for definitions, and questions about the differences between database entities [MCKEOWN 80]. In this context, input questions provide the initial motivation for speaking. Although the specific application of answering questions about database structure was used primarily for testing principles about text generation, it is a feature that many users of such systems would like. Several experiments ([MALHOTRA 75], [TENNANT 79]) have shown that users often ask questions to familiarize themselves with the database structure before proceeding to make requests about the database contents. The three classes of questions considered for this system were among those shown to be needed in a natural language database system. Implementation of the TEXT system for natural language generation used a portion of the Office of Naval Research (ONR) database containing information about vehicles and destructive devices. Some examples of questions that can be asked of the system include: > What is a frigate? > What do you know about submarines? > What is the difference between a whisky and a kitty hawk? The kind of generation of which the system is capable is illustrated by the response it generates to question (A) below. All entities in the ONR database have DB attributes REMARKS. There are 2 types of entities in the ONR database: destructive devices and vehicles. The vehicle has DB attributes that provide information on SPEED-INDICES and TRAVEL-MEANS. The destructive device has DB attributes that provide information on LETHAL-INDICES. TEXT does not itself contain a facility for interpreting a user's questions. Questions must be phrased using a simple functional notation (shown below) which corresponds to the types of questions that can be asked. It is assumed that a component could be built to perform this type of task and that the decisions it must make would not affect the performance of the generation system. where <e>, <el>, <e2> represent entities in the database.
system overview:
In answering a question about database structure, TEXT identifies those rhetorical techniques that could be used for presenting an appropriate answer. On the basis of the input question, semantic processes produce a relevant knowledge pool. A characterization of the information in this pool is then used to select a single partially ordered set of rhetorical techniques from the various possibilities. A formal representation of the answer (called a "message") is constructed by selecting propositions from the relevant knowledge pool which match the rhetorical techniques in the given set. The focusing mechanism monitors the matching process; where there are choices for what to say next (i.e., either alternative techniques are possible or a single technique matches several propositions in the knowledge pool), the focusing mechanism selects that proposition which ties in most closely with the previous discourse. Once the message has been constructed, the system passes the message to a tactical component [BOSSIE 81] which uses a functional grammar [KAY 79] to translate the message into English.
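To make this flow of control concrete, here is a minimal runnable sketch of the strategic component's four steps in Python. The toy knowledge pool, the schema table, and the one-line stand-in for the tactical component are illustrative assumptions, not the original TEXT implementation.

```python
# Minimal sketch of the strategic component's flow (illustrative toy, not the original TEXT code).

KB = {  # toy relevant knowledge pool: entity -> (predicate, fact) pairs
    "AIRCRAFT-CARRIER": [
        ("identification", "An aircraft carrier is a surface ship"),
        ("analogy", "Aircraft carriers have a greater LENGTH than all other ships"),
        ("evidence", "A ship is classified as an aircraft carrier if characters 1-2 of its HULL_NO are CV"),
    ],
}

SCHEMAS = {"definition": ["identification", "analogy", "evidence"]}  # question type -> ordered predicates

def answer(question):
    pool = KB[question["entity"]]                  # 1. semantic processing: relevant knowledge pool
    schema = SCHEMAS[question["type"]]             # 2. schema selection
    message = [fact for pred in schema             # 3. schema instantiation
               for p, fact in pool if p == pred]   #    (focus constraints omitted in this toy)
    return ". ".join(message) + "."                # 4. stand-in for the tactical component

print(answer({"type": "definition", "entity": "AIRCRAFT-CARRIER"}))
```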
knowledge base:
Answering questions about the structure of the database requires access to a high-level description of the classes of objects in the database, their properties, and the relationships between them. The knowledge base used for the TEXT system is a standard database model which draws primarily from representations developed by Chen [CHEN 76] [...] course of an answer. The relevant knowledge pool is constructed by a fairly simple process. For requests for definitions or available information, the area around the questioned object containing the information immediately associated with the entity (e.g. its superordinates, sub-types, and attributes) is circumscribed and partitioned from the remaining knowledge base. For questions about the difference between entities, the information included in the relevant knowledge pool depends on how close in the generalization hierarchy the two entities are. For entities that are very similar, detailed attributive information is included. For entities that are very different, only generic class information is included. A combination of this information is included for entities falling between these two extremes. (See [MCKEOWN 82] for further details.) 6.0 RHETORICAL PREDICATES Rhetorical predicates are the means which a speaker has for describing information. They characterize the different types of predicating acts s/he may use and delineate the structural relation between propositions in a text. Some examples are "analogy" (comparison with a familiar object), "constituency" (description of sub-parts or sub-types), and "attributive" (associating properties with an entity or event). Linguistic discussion of such predicates (e.g. [GRIMES 75], [SHEPHERD 26]) indicates that some combinations are preferable to others. Moreover, Grimes claims that predicates are recursive and can be used to identify the organization of text on any level (i.e., proposition, sentence, paragraph, or longer sequence of text), although he does not show how. I have examined texts and transcripts and have found that not only are certain combinations of rhetorical techniques more likely than others, certain ones are more appropriate in some discourse situations than others. For example, I found that objects were frequently defined by employing some combination of the following means: (1) identifying an item as a member of some generic class, (2) describing an object's function, attributes, and constituency (either physical or class), (3) making analogies to familiar objects, and (4) providing examples. These techniques were rarely used in random order; for instance, it was common to identify an item as a member of some generic class before providing examples. In the TEXT system, these types of standard patterns of discourse structure have been captured in schemas associated with explicit discourse purposes. The schemas loosely identify normal patterns of usage. They are not intended to serve as grammars of text. The schema shown below serves the purpose of providing definitions: Identification Schema: identification (class & attribute/function) [analogy/constituency/attributive]* [particular-illustration/evidence]+ {amplification/analogy/attributive} {particular-illustration/evidence} Here, "{ }" indicates optionality, "/" indicates alternatives, "+" indicates that the item may appear 1-n times, and "*" indicates that the item may appear 0-n times.
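The schema notation above can be read as a regular pattern over predicate names. The following sketch encodes the identification schema that way; the single-letter encoding and the use of a regular expression are assumptions for illustration, not how TEXT actually stores its schemas.

```python
import re

PRED = {"identification": "I", "analogy": "A", "constituency": "C", "attributive": "T",
        "particular-illustration": "P", "evidence": "E", "amplification": "M"}

# identification, then [analogy/constituency/attributive]*, then
# [particular-illustration/evidence]+, then the two optional closing slots.
IDENTIFICATION_SCHEMA = r"I[ACT]*[PE]+[MAT]?[PE]?$"

def fits_identification(predicates):
    """True if an ordered predicate sequence is one instance of the schema."""
    return re.match(IDENTIFICATION_SCHEMA, "".join(PRED[p] for p in predicates)) is not None

# The ship answer described below: identification, evidence, attributive, particular-illustration.
print(fits_identification(["identification", "evidence", "attributive", "particular-illustration"]))  # True
print(fits_identification(["analogy", "identification"]))  # False: must open with identification
```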
The order of the predicates indicates that the normal pattern of definitions is an identifying proposition followed by any number of descriptive predicates. The speaker then provides one or more examples and can optionally close with some additional descriptive information and possibly another example. TEXT's response to the question "What is a ship?" (shown below) was generated using the identification schema. The sentences are numbered to show the correspondence between each sentence and the predicate it corresponds to in the instantiated schema (the numbers do not occur in the actual output). TEXT's response to the question "What do you know about vehicles?" was generated using the constituency schema. It is shown below along with the predicates that were instantiated for the answer. 2) There are 2 types of vehicles in the ONR database: aircraft and water-going vehicles. 3) The water-going vehicle has DB attributes that provide information on TRAVEL_MEANS and WATER_GOING_OPERATION. 4) The aircraft has DB attributes that provide information on TRAVEL_MEANS, FLIGHT_RADIUS, CEILING and ROLE. Other DB attributes of the vehicle include FUEL (FUEL_CAPACITY and FUEL_TYPE) and FLAG. Two other strategies were identified in the texts examined. These are encoded in the attributive schema, which is used to provide detailed information about a particular aspect of an entity, and the compare and contrast schema, which encodes a strategy for contrasting two entities using a description of their similarities and their differences. For more detail on these strategies, see [MCKEOWN 82].
use of the schemas:
As noted earlier, an examination of texts revealed that different strategies were used in different situations. In TEXT, this association of technique with discourse purpose is achieved by associating the different schemas with different question-types. For example, if the question involves defining a term, a different set of schemas (and therefore rhetorical techniques) is chosen than if the question involves describing the type of information available in the database. The identification schema can be used in response to a request for a definition. The purpose of the attributive schema is to provide detailed information about one particular aspect of any concept and it can therefore be used in response to a request for information. In situations where an object or concept can be described in terms of its sub-parts or sub-classes, the constituency schema is used. It may be selected in response to requests for either definitions or information. The compare and contrast schema is used in response to a question about the difference between objects. A summary of the assignment of schemas to question-types is shown in Figure 2 (Schemas used for TEXT). Once a question has been posed to TEXT, a schema must be selected for the response structure which will then be used to control the decisions involved in deciding what to say when. On the basis of the given question, a set of schemas is selected as possible structures for the response. This set includes those schemas associated with the given question-type (see Figure 2 above). A single schema is selected out of this set on the basis of the information available to answer the question. For example, in response to requests for definitions, the constituency schema is selected when the relevant knowledge pool contains a rich description of the questioned object's sub-classes and less information about the object itself. When this is not the case, the identification schema is used. The test for what kind of information is available is a relatively simple one. If the questioned object occurs at a higher level in the hierarchy than a pre-determined level, the constituency schema is used. Note that the higher an entity occurs in the hierarchy, the less descriptive information is available about the entity itself. More information is available about its sub-parts since fewer common features are associated with entities higher in the hierarchy. This type of semantic and structural interaction means that a different schema may be used for answering the same type of question. An earlier example showed that the identification schema was selected by the TEXT system in response to a request for a definition of a ship. In response to a request for a definition of a guided projectile (shown below), the constituency schema is selected since more information is available about the sub-classes of the guided projectile than about the guided projectile itself. Schema selected: Constituency 1) identification 2) constituency 3) identification 4) identification 5) evidence 6) evidence 7) attributive 1) A guided projectile is a projectile that is self-propelled. 2) There are 2 types of guided projectiles in the ONR database: torpedoes and missiles. 3) The missile has a target location in the air or on the earth's surface. 4) The torpedo has an underwater target location. 5) [...] Once a schema has been selected, it is filled by matching the predicates it contains against the relevant knowledge pool.
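A minimal sketch of this selection logic follows. The numeric hierarchy-level threshold and the exact mapping for information requests are assumptions standing in for the paper's Figure 2 and its "pre-determined level"; they are not the actual TEXT tables.

```python
# Sketch of schema selection by question type and hierarchy level (assumed thresholds/names).

def select_schema(question_type, entity_level, constituency_threshold=2):
    """entity_level: depth of the questioned entity in the generalization hierarchy
    (smaller = higher, i.e. more generic). The threshold stands in for the paper's
    'pre-determined level'."""
    if question_type == "differences":
        return "compare-and-contrast"
    if question_type == "information":
        # requests for information can use constituency or attributive strategies
        return "constituency" if entity_level <= constituency_threshold else "attributive"
    if question_type == "definition":
        # high-level entities are better described through their sub-classes
        return "constituency" if entity_level <= constituency_threshold else "identification"
    raise ValueError(question_type)

print(select_schema("definition", entity_level=1))  # e.g. GUIDED-PROJECTILE -> constituency
print(select_schema("definition", entity_level=4))  # e.g. AIRCRAFT-CARRIER -> identification
```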
The semantics of each predicate define the type of information it can match in the knowledge pool. The semantics defined for TEXT are particular to the database query domain and would have to be redefined if the schemas were to be used in another type of system (such as a tutorial system, for example). The semantics are not particular, however, to the domain of the database. When transferring the system from one database to another, the predicate semantics would not have to be altered. A proposition is an instantiated predicate; predicate arguments have been filled with values from the knowledge base. An instantiation of the identification predicate is shown below along with its eventual translation. The schema is filled by stepping through it, using the predicate semantics to select information which matches the predicate arguments. In places where alternative predicates occur in the schema, all alternatives are matched against the relevant knowledge pool producing a set of propositions. The focus constraints are used to select the most appropriate proposition. The schemas were implemented using a formalism similar to an augmented transition network (ATN). Taking an arc corresponds to the selection of a proposition for the answer. States correspond to filled stages of the schema. The main difference between the TEXT system implementation and a usual ATN, however, is in the control of alternatives. Instead of uncontrolled backtracking, TEXT uses one state lookahead. From a given state, it explores all possible next states and chooses among them using a function that encodes the focus constraints. This use of one state lookahead increases the efficiency of the strategic component since it eliminates unbounded non-determinism.
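The following sketch illustrates the one-state-lookahead control regime described above, with the tie-breaking function passed in as a parameter. The data structures are assumed for illustration; the actual system used an ATN-like formalism, not this code.

```python
# Sketch of schema instantiation with one-state lookahead (assumed structures, hypothetical names).

def instantiate(schema_arcs, pool, prefer):
    """schema_arcs: for each step, the set of predicate names that may be taken.
    pool: candidate propositions, here simple (predicate, focused-argument) pairs.
    prefer: tie-breaking function encoding the focus constraints."""
    message = []
    for alternatives in schema_arcs:
        # Look one state ahead: every proposition that could legally be said next.
        candidates = [p for p in pool if p[0] in alternatives and p not in message]
        if candidates:                      # optional arcs may simply not be taken
            message.append(prefer(candidates, message))
    return message

# Toy run: always prefer the first candidate (a focus-based scorer is sketched further below).
arcs = [{"identification"}, {"analogy", "attributive"}, {"evidence", "particular-illustration"}]
pool = [("identification", "AIRCRAFT-CARRIER"), ("analogy", "AIRCRAFT-CARRIER"),
        ("evidence", "HULL_NO")]
print(instantiate(arcs, pool, prefer=lambda cands, msg: cands[0]))
```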
focusing mechanism:
So far, a speaker has been shown to be limited in many ways. For example, s/he is limited by the goal s/he is trying to achieve in the current speech act. TEXT's goal is to answer the user's current question. To achieve that goal, the speaker has limited his/her scope of attention to a set of objects relevant to this goal, as represented by global focus or the relevant knowledge pool. The speaker is also limited by his/her higher-level plan of how to achieve the goal. In TEXT, this plan is the chosen schema. Within these constraints, however, a speaker may still run into the problem of deciding what to say next. A focusing mechanism is used to provide further constraints on what can be said. The focus constraints used in TEXT are immediate, since they use the most recent proposition (corresponding to a sentence in the English answer) to constrain the next utterance. Thus, as the text is constructed, it is used to constrain what can be said next. Sidner [SIDNER 79] used three pieces of information for tracking immediate focus: the immediate focus of a sentence (represented by the current focus - CF), the elements of a sentence that are potential candidates for a change in focus (represented by a potential focus list - PFL), and past immediate foci (represented by a focus stack). She showed that a speaker has the following options from one sentence to the next: 1) to continue focusing on the same thing, 2) to focus on one of the items introduced in the last sentence, 3) to return to a previous topic, in which case the focus stack is popped, or 4) to focus on an item implicitly related to any of these three options. Sidner's work on focusing concerned the interpretation of anaphora. She says nothing about which of these four options is preferred over others since in interpretation the choice has already been made. For generation, however, a speaker may have to choose between these options at any point, given all that s/he wants to say. The speaker may be faced with the following choices: 1) continuing to talk about the same thing (current-focus equals current-focus of the previous sentence) or starting to talk about something introduced in the last sentence (current-focus is a member of potential-focus-list of the previous sentence) and 2) continuing to talk about the same thing (current focus remains the same) or returning to a topic of previous discussion (current focus is a member of the focus-stack). When faced with the choice of remaining on the same topic or switching to one just introduced, I claim a speaker's preference is to switch. If the speaker has something to say about an item just introduced and does not present it next, s/he must go to the trouble of re-introducing it later on. If s/he does present information about the new item first, however, s/he can easily continue where s/he left off by following Sidner's legal option #3. Thus, for reasons of efficiency, the speaker should shift focus to talk about an item just introduced when s/he has something to say about it. When faced with the choice of continuing to talk about the same thing or returning to a previous topic of conversation, I claim a speaker's preference is to remain on the same topic. Having at some point shifted focus to the current focus, the speaker has opened a topic for conversation.
By shifting back to the earlier focus, the speaker closes this new topic, implying that s/he has nothing more to say about it when in fact, s/he does. Therefore, the speaker should maintain the current focus when possible in order to avoid false implication of a finished topic. These two guidelines for changing and maintaining focus during the process of generating language provide an ordering on the three basic legal focus moves that Sidner specifies: 1. change focus to a member of the previous potential focus list if possible - CF (new sentence) is a member of PFL (last sentence); 2. maintain focus if possible - CF (new sentence) = CF (last sentence); 3. return to a topic of previous discussion - CF (new sentence) is a member of the focus-stack. I have not investigated the problem of incorporating focus moves to items implicitly associated with either current foci, potential focus list members, or previous foci into this scheme. This remains a topic for future research. Even these guidelines, however, do not appear to be enough to ensure a connected discourse. Although a speaker may decide to focus on a specific entity, s/he may want to convey information about several properties of that entity. S/he will describe related properties of the entity before describing other properties. Thus, strands of semantic connectivity will occur at more than one level of the discourse. An example of this phenomenon is given in dialogues (A) and (B) below. In both, the discourse is focusing on a single entity (the balloon), but in (A) properties that must be talked about are presented randomly. In (B), a related set of properties (color) is discussed before the next set (size). (B), as a result, is more connected than (A). (A) The balloon was red and white striped. Because this balloon was designed to carry men, it had to be large. It had a silver circle at the top to reflect heat. In fact, it was larger than any balloon John had ever seen. (B) The balloon was red and white striped. It had a silver circle at the top to reflect heat. Because this balloon was designed to carry men, it had to be large. In fact, it was larger than any balloon John had ever seen. In the generation process, this phenomenon is accounted for by further constraining the choice of what to talk about next to the proposition with the greatest number of links to the potential focus list. TEXT uses the legal focus moves identified by Sidner by only matching schema predicates against propositions which have an argument that can be focused in satisfaction of the legal options. Thus, the matching process itself is constrained by the focus mechanism. The focus preferences developed for generation are used to select between remaining options. These options occur in TEXT when a predicate matches more than one piece of information in the relevant knowledge pool or when more than one alternative in a schema can be satisfied. In such cases, the focus guidelines are used to select the most appropriate proposition. When options exist, all propositions are selected which have as focused argument a member of the previous PFL. If none exist, then all propositions are selected whose focused argument is the previous current-focus. If none exist, then all propositions are selected whose focused argument is a member of the focus-stack. If these filtering steps do not narrow down the possibilities to a single proposition, the proposition with the greatest number of links to the previous PFL is selected for the answer.
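The filtering order just described can be sketched directly. The dictionary representation of a proposition (a "focus" argument plus its own potential focus list) is an assumption for illustration, not the internal TEXT representation.

```python
# Sketch of the focus-preference filtering described above (assumed representation).

def choose_next(propositions, prev_cf, prev_pfl, focus_stack):
    """Preference order: focused argument in the previous PFL, then equal to the
    previous CF, then on the focus stack; remaining ties go to the proposition
    with the most links into the previous PFL."""
    for test in (lambda p: p["focus"] in prev_pfl,
                 lambda p: p["focus"] == prev_cf,
                 lambda p: p["focus"] in focus_stack):
        filtered = [p for p in propositions if test(p)]
        if filtered:
            propositions = filtered
            break
    return max(propositions, key=lambda p: len(set(p["pfl"]) & set(prev_pfl)))

cands = [{"focus": "SHIP", "pfl": ["SHIP", "DISPLACEMENT"]},
         {"focus": "DISPLACEMENT", "pfl": ["DISPLACEMENT", "LENGTH"]}]
# The speaker prefers to shift to the item just introduced (DISPLACEMENT) rather
# than stay on SHIP, matching the preference argued for above.
print(choose_next(cands, prev_cf="SHIP", prev_pfl=["DISPLACEMENT", "LENGTH"], focus_stack=[]))
```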
The focus and potential focus list of each proposition is maintained and passed to the tactical component for use in selecting syntactic constructions and pronominalization. Interaction of the focus constraints with the schemas means that although the same schema may be selected for different answers, it can be instantiated in different ways. Recall that the identification schema was selected in response to the question "What is a ship?" and the four predicates, identification, evidence, attributive, and particular-illustration, were instantiated. The identification schema was also selected in response to the question "What is an aircraft carrier?", but different predicates were instantiated as a result of the focus constraints: (definition AIRCRAFT-CARRIER) Schema selected: identification 1) identification 2) analogy 3) particular-illustration 4) amplification 5) evidence 1) An aircraft carrier is a surface ship with a DISPLACEMENT between 78000 and 80800 and a LENGTH between 1039 and 1063. 2) Aircraft carriers have a greater LENGTH than all other ships and a greater DISPLACEMENT than most other ships. 3) Mine warfare ships, for example, have a DISPLACEMENT of 320 and a LENGTH of 144. 4) All aircraft carriers in the ONR database have REMARKS of 0, FUEL_TYPE of BNKR, FLAG of BLBL, BEAM of 252, ENDURANCE_RANGE of 4000, ECONOMIC_SPEED of 12, ENDURANCE_SPEED of 30 and PROPULSION of STMTURGRD. 5) A ship is classified as an aircraft carrier if the characters 1 through 2 of its HULL_NO are CV.
future directions:
Several possibilities for further development of the research described here include 1) the use of the same strategies for responding to questions about attributes, events, and relations as well as to questions about entities, 2) investigation of strategies needed for responding to questions about the system processes (e.g. How is manufacturer's cost determined?) or system capabilities (e.g. Can you handle ellipsis?), 3) responding to presuppositional failure as well as to direct questions, and 4) the incorporation of a user model in the generation process (currently TEXT assumes a static casual, naive user and gears its responses to this characterization). This last feature could be used, among other ways, in determining the amount of detail required (see [MCKEOWN 82] for discussion of the recursive use of the schemas).
conclusion:
The TEXT system successfully incorporates principles of relevancy criteria, discourse structure, and focus constraints into a method for generating English text of paragraph length. Previous work on focus of attention has been extended for the task of generation to provide constraints on what to say next. Knowledge about discourse structure has been encoded into schemas that are used to guide the generation process. The use of these two interacting mechanisms constitutes a departure from earlier generation systems. The approach taken in this research is that the generation process should not simply trace the knowledge representation to produce text. Instead, communicative strategies people are familiar with are used to effectively convey information. This means that the same information may be described in different ways on different occasions. The result is a system which constructs and orders a message in response to a given question. Although the system was designed to generate answers to questions about database structure (a feature lacking in most natural language database systems), the same techniques and principles could be used in other application areas (for example, computer assisted instruction systems, expert systems, etc.) where generation of language is needed. Acknowledgements: I would like to thank Aravind Joshi, Bonnie Webber, Kathleen McCoy, and Eric Mays for their invaluable comments on the style and content of this paper. Thanks also go to Kathleen McCoy and Steven Bossie for their roles in implementing portions of the system. [MALHOTRA 75]. Malhotra, A. "Design criteria for a knowledge-based English language system for management: an experimental analysis." MAC TR-146, MIT, Cambridge, Mass. (1975). [TENNANT 79]. Tennant, H., "Experience with the evaluation of natural language question answerers." Working paper #18, Univ. of Illinois, Urbana-Champaign, Ill. (1979).
Appendix:
| null | null | null | null | {
"paperhash": [
"mccoy|augmenting_a_database_knowledge_representation_for_natural_language_generation",
"mckeown|generating_relevant_explanations:_natural_language_responses_to_questions_about_database_structure",
"tennant|experience_with_the_evaluation_of_natural_language_question_answerers",
"sidner|towards_a_computational_theory_of_definite_anaphora_comprehension_in_english_discourse",
"allen|a_functional_grammar",
"mckeown|generating_natural_language_text_in_response_to_questions_about_database_structure",
"grosz|the_representation_and_use_of_focus_in_dialogue_understanding.",
"chen|the_entity-relationship_model:_towards_a_unified_view_of_data",
"malhotra|design_criteria_for_a_knowledge-based_english_language_system_for_management_:_an_experimental_analysis"
],
"title": [
"Augmenting a Database Knowledge Representation for Natural Language Generation",
"Generating Relevant Explanations: Natural Language Responses to Questions about Database Structure",
"Experience with the Evaluation of Natural Language Question Answerers",
"Towards a computational theory of definite anaphora comprehension in English discourse",
"A Functional Grammar",
"Generating natural language text in response to questions about database structure",
"The representation and use of focus in dialogue understanding.",
"The Entity-Relationship Model: Towards a unified view of Data",
"Design criteria for a knowledge-based English language system for management : an experimental analysis"
],
"abstract": [
"The knowledge representation is an important factor in natural language generation since it limits the semantic capabilities of the generation system. This paper identifies several information types in a knowledge representation that can be used to generate meaningful responses to questions about database structure. Creating such a knowledge representation, however, is a long and tedious process. A system is presented which uses the contents of the database to form part of this knowledge representation automatically. It employs three types of world knowledge axioms to ensure that the representation formed is meaningful and contains salient information.",
"The research described here is aimed at unresolved problems in both natural language generation and natural language interfaces to database systems. How relevant information is selected and then organized for the generation of responses to questions about database structure is examined. Due to limited space, this paper reports on only one method of explanation, called \"compare and contrast\". In particular, it describes a specific constraint on relevancy and organization that can be used for this response type.",
"Research in natural language processing could be facilitated by thorough and critical evaluations of natural language systems. Two measurements, conceptual and linguistic completeness, are defined and discussed in this paper. Testing done on two natural language question answerers demonstrated that the conceptual coverage of such systems should be extended to better satisfy the needs and expectations of users.",
"Abstract : This report investigates the process of focussing as a description and explanation of the comprehension of certain anaphoric expressions in English discourse. The investigation centers on the interpretation of definite anaphora, that is, on the personal pronouns, and noun phrases used with a definite article the, this, or that. Focussing is formalized as a process in which a speaker centers attention on a particular aspect of the discourse. An algorithmic description specifies what the speaker can focus on and how the speaker may change the focus of the discourse as the discourse unfolds. The algorithm allows for a simple focussing mechanism to be constructed: an element in focus, an ordered collection of alternate foci, and a stack of old foci. The data structure for the element in focus is a representation which encodes a limited set of associations between it and other elements from the discourse as well as from general knowledge. This report also establishes other constraints which are needed for the successful comprehension of anaphoric expressions. The focussing mechanism is designed to take advantage of syntactic and semantic information encoded as constraints on the choice of anaphora interpretation. These constraints are due to the work of language researchers; and the focussing mechanism provides a principled means for choosing when to apply the constraints in the comprehension process.",
"Functional Grammar describes grammar in functional terms in which a language is interpreted as a system of meanings. The language system consists of three macro-functions known as meta-functional components: the interpersonal function, the ideational function, and the textual function, all of which make a contribution to the structure of a text. The concepts discussed in Functional Grammar aims at giving contribution to the understanding of a text and evaluation of a text, which can be applied for text analysis. Using the concepts in Functional Grammar, English teachers may help the students learn how various grammatical features and grammatical systems are used in written texts so that they can read and write better.",
"There are two major aspects of computer-based text generation: (1) determining the content and textual shape of what is to be said; and (2) transforming that message into natural language. Emphasis in this research has been on a computational solution to the questions of what to say and how to organize it effectively. A generation method was developed and implemented in a system called TEXT that uses principles of discourse structure, discourse coherency, and relevancy criterion. \nThe main features of the generation method developed for the TEXT strategic component include (1) selection of relevant information for the answer, (2) the pairing of rhetorical techniques for communication (such as analogy) with discourse purposes (for example, providing definitions) and (3) a focusing mechanism. Rhetorical techniques, which encode aspects of discourse structure, are used to guide the selection of propositions from a relevant knowledge pool. The focusing mechanism aids in the organization of the message by constraining the selection of information to be talked about next to that which ties in with the previous discourse in an appropriate way. \nThis work on generation has been done within the framework of a natural language interface to a database system. The implemented system generates responses of paragraph length to questions about database structure. Three classes of questions have been considered: questions about information available in the database, requests for definitions, and questions about the differences between database entities. \nThe main theoretical results of this research have been on the effect of discourse structure and focus constraints on the generation process. A computational treatment of rhetorical devices has been developed which is used to guide the generation process. Previous work on focus of attention has been extended for the task of generation to provide constraints on what to say next. The use of these two interacting mechanisms constitutes a departure from earlier generation systems. The approach taken in this research is that the generation process should not simply trace the knowledge representation to produce text. Instead, communicative strategies people are familiar with are used to effectively convey information. This means that the same information may be described in different ways on different occasions.",
"Abstract : This report develops a representation of focus of attention thatcircumscribes discourse contexts within a general representation ofknowledge. Focus of attention is essential to any comprehension processbecause what and how a person understands is strongly influenced bywhere his attention is directed at a given moment. To formalize thenotion of focus, the need for and the use of focus mechanisms areconsidered from the standpoint of building a computer system that canparticipate in a natural language dialogue with a ser, Two ranges offocus, global and immediate, are investigated, and representations forincorporating them in a computer system are developed.The global focus in which an utterance is interpreted is determinedby the total discourse and situational setting of the utterance. Itinfluences what is talked about, how different concepts are introduced,and how concepts are referenced. To encode global focuscomputationally, a representation is developed that highlights thoseitems that are relevant at a given place in a dialogue. The underlyingknowledge representation is segmented into subunits, called focusspaces, that contain those items that are in the focus of attention of adialogue participant during a particular part of the dialogue.Mechanisms are required for updating the focus representation,because, as a dialogue progresses, the objects and actions that arerelevant to the conversation, and therefore in the participants' focusof attention, change. Procedures are described for deciding when andhow to shift focus in task-oriented dialogues, i.e., in dialogues inwhich the participants are cooperating in a shared task. Theseprocedures are guided by a representation of the task being performed.The ability to represent focus of attention in a languageunderstanding system results in a new approach to an important problemin discourse comprehension -- the identification of the referents ofdefinite noun phrases.",
"An improved method of operation is provided for a catalytic, low pressure process for continuously reforming a hydrocarbon charge stock boiling in the gasoline range in order to produce a high octane effluent stream in which process the hydrocarbon charge stock and hydrogen are continuously contacted in a reforming zone with a reforming catalyst containing a catalytically effective amount of a platinum group metal at reforming conditions including a pressure of 25 to 250 psig. The improved method of operation involves continuously adding a refractory light hydrocarbon to the reforming zone in an amount sufficient to result in a mole ratio of refractory light hydrocarbon to hydrogen entering the reforming zone of about 0.4:1 to about 10:1. Moreover, the refractory light hydrocarbon addition is commenced at start-up of the process and continued throughout the duration of the reforming run. The principal advantage associated with this improved method of operation is increased stability of the reforming catalyst and particularly, increased temperature stability at octane.",
"Thesis (Ph. D.)--Massachusetts Institute of Technology, Alfred P. Sloan School of Management, 1975."
],
"authors": [
{
"name": [
"Kathleen F. McCoy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"K. McKeown"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"H. Tennant"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Sidner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"H. B. Allen",
"M. Bryant"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"K. McKeown"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Grosz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Peter P. Chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Malhotra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"10166824",
"29383798",
"31711072",
"41092026",
"150098969",
"62743223",
"61114426",
"260927278",
"60706096"
],
"intents": [
[
"methodology"
],
[],
[],
[
"methodology"
],
[],
[
"background"
],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | - Problem: The paper addresses the challenges of computer-based generation of natural language, specifically focusing on determining the content and textual shape of the message to be conveyed and transforming it into English.
- Solution: The paper proposes a computational solution that involves an interaction between structural and semantic processes, utilizing schemas to guide the generation process and a focusing mechanism to maintain discourse coherency. | 512 | 0.058594 | null | null | null | null | null | null | null | null |
e5bd62c197c6bacbe071eff5fb26849a508f10da | 10166824 | null | Augmenting a Database Knowledge Representation for Natural Language Generation | these attributes are not necessarily attributes contained in the database. | {
"name": [
"McCoy, Kathleen F."
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 9 | 20 | null | The knowledge representation is an important factor in natural language generation since it limits the semantic capabilities of the generation system. This paper identifies several information types in a knowledge representation that can be used to generate meaningful responses to questions about database structure.Creating such a knowledge representation, however, is a long and tedious process. A system is presented which uses the contents of the database to form part of this knowledge representation automatically. It employs three types of world knowledge axioms to ensure that the representation formed is meaningful and contains salient information.representation reflects both the database contents and the database designer's view of the world.One important class of questions involves comparing database entities. The system's knowledge representation must therefore contain meaningful information that can be used to make comparisons (analogies) between various entity classes. This paper focuses specifically on those aspects of the knowledge representation generated by ENHANCEwhich facilitate the use of analogies. An overview of the knowledge representation used by TEXT is first given. This is followed by a discussion of how part of this representation is automatically created by ENHANCE.In order for a user to extract meaningful information from a database system, s/he must first understand the system's view of the world what information the system contains and what that information represents. An optimal way of acquiring this knowledge is to interact, in natural language, with the system itself, posing questions to it about the structure of its contents.The TEXT system [McKeown 82 ] was developed to faci~te this type of interaction.In order to make use of the TEXT system, a system's knowledge about itself must be rich enough to support the generation of interesting texts about the structure of its contents. As I will demonstrate, standard database models [Chen 76] , [Smith & Smith 77] are not sufficient to support this type of generation. Moreover, since time is such an important factor when generating answers, and extensive inferencing is therefore not practical, the system's self knowledge must be i~ediately available in its knowledge representation.Tne ENHANCE system, described here, has been developed to augment a database schema with the kind of information necessary for generating informative answers to users' queries.The ENHANCE system creates part of the knowledge representation used by TEXT based on the contents of the database. A set of world knowledge axioms are used to ensure that this knowledge ~rk was partially supported by National Science 5oundatlon grant #MCS81-07290. | null | In order for the generation system to give meaningful descriptions of the database, the knowledge representation must effectively capture both a typical user's view of the domain and how that domain has been modelled within the system. Without real world knowledge indicating what a user finds meaningful, there are several ways in which an automatically generated taxonomy may deviate from how a user views the domain:(I) the representation may fail %o capture the user's preconceived notions of how a certain database * The sentences are numbered here to simplify the discussion:there are no sentence n~nbers in the actual material produced by TEXT. 
entity class should be partitioned into sub-classes; (2) the system may partition an entity class on the basis of a non-salient attribute leading to an inappropriate breakdown; (3) non-salient information may be chosen to describe the sub-classes leading to inappropriate descriptions; (4) a breakdown may fail to add meaning to the representation (e.g. a partition chosen may simply duplicate information already available). The first case will occur if the sub-types of these breakdowns are not completely reflected in the database attribute names and values. For example, even though the partition of SHIP into its various types (e.g. Aircraft-Carrier, Destroyer, etc.) is very common, there may be no attribute SHIP_TYPE in the database to form this partition. The partition can be derived, however, if a semantic mapping between the sub-type names and existing attribute-value pairs can be identified. In this case, the partition can be derived by associating the first few characters of attribute HULL_NO with the various ship-types. The very specific axioms are provided as a means for defining such mappings. The taxonomy may also deviate from what a user might expect if the system partitions an entity class on the basis of non-salient attributes. It seems very natural to have a breakdown of SHIP based on attribute CLASS, but one based on attribute FUEL-CAPACITY would seem less appropriate. A partition based on CLASS would yield sub-classes of SHIP such as SKORY and KITTY-HAWK, while one on FUEL_CAPACITY could only yield ones like SHIPS-WITH-100-FUEL-CAPACITY. Since saliency is not an intrinsic property of an attribute, there must be a way of indicating attributes salient in the domain. The specific axioms are provided for this purpose. The user's view of the domain will not be captured if the information chosen to describe the sub-classes is not chosen from attributes important to the domain. Saliency is crucial in choosing the descriptive information (particularly the DDAs) for the sub-classes. Even though a DESTROYER may be differentiated from other types of ships by its ECONOMIC-SPEED, it seems more informative to distinguish it in terms of the more commonly mentioned property DISPLACEMENT. Here again, this saliency information is provided by the specific axioms. A final problem faced by a system which only relies on the database contents is that a partition formed may be essentially meaningless (adding no new information to the representation). This will occur if all of the instances in the database fall into the same sub-class or if each falls into a different one. Such breakdowns either exactly reflect the entity class as a whole, or reflect the individual instances. This same type of problem occurs if the only difference between two sub-classes is the attribute the breakdown is based on. Thus, no trend can be found among the other attributes within the sub-classes formed. Such a breakdown would add no information that could not be trivially derived from the database itself. These types of breakdowns are "filtered out" using the general axioms. The world knowledge axioms guide ENHANCE to ensure that the breakdowns formed are appropriate and that salient information is chosen for the sub-class descriptions. At the same time, the axioms give the designer control over the representation formed. The axioms can be changed and the system rerun. The new representation will reflect the new set of world knowledge axioms. In this way, the database designer can tune the representation to his/her needs.
Each axiom category, how it is used by ENHANCE, and the problems it solves are discussed below. The very specific axioms give the user the most control over the representation formed. They let the user specify breakdowns that s/he would a priori like to appear in the knowledge representation. The axioms are formulated in such a way as to allow breakdowns on parts of the value field of a character attribute, and on ranges of values for a numeric attribute (examples of each are given below). This type of breakdown could not be formed without explicit information indicating the defining portions of the attribute value field and their associated semantic values. A sample use of the very specific axioms can be found in classifying ships by their type (i.e. Aircraft-carriers, Destroyers, Mine-warfare-ships, etc.). This is a very common breakdown of ships. Assume there is no database attribute which explicitly gives the ship type. With no additional information, there is no way of generating that breakdown for ship. A user knowledgeable of the domain would note that there is a way to derive the type of a ship based on its HULL_NO. In fact, the first one or two characters of the HULL_NO uniquely identify the ship type. For example, all AIRCRAFT-CARRIERS have a HULL_NO whose first two characters are CV, while the first two characters of the HULL_NO of a CRUISER are CA or CG or CL. This information can be captured in a very specific axiom which maps part of a character attribute field into the sub-type names. An example of such an axiom is shown in Figure 1: (SHIP "SHIP HULL_NO" "OTHER-SHIP-TYPE" (1 2 "CV" "AIRCRAFT-CARRIER") (1 2 "CA" "CRUISER") (1 2 "CG" "CRUISER") (1 2 "CL" "CRUISER") (1 2 "DD" "DESTROYER") (1 2 "DL" "FRIGATE") (1 2 "DE" "OCEAN-ESCORT") (1 2 "PC" "PATROL-SHIP-AND-CRAFT") (1 2 "PG" "PATROL-SHIP-AND-CRAFT") (1 2 "PT" "PATROL-SHIP-AND-CRAFT") (1 1 "L" "AMPHIBIOUS-AND-LANDING-SHIP") (1 2 "MC" "MINE-WARFARE-SHIP") (1 2 "MS" "MINE-WARFARE-SHIP") (1 1 "A" "AUXILIARY-SHIP")) Figure 1. Very Specific (Character) Axiom. Sub-typing of entities may also be specified based on the ranges of values of a numeric attribute. For example, the entity BOMB is often sub-typed by the range of the attribute BOMB_WEIGHT. A BOMB is classified as being HEAVY if its weight is above 900, MEDIUM-WEIGHT if it is between 100 and 899, and LIGHT-WEIGHT if its weight is less than 100. An axiom which specifies this is shown in Figure 2: (BOMB "BOMB_WEIGHT" "OTHER-WEIGHT-BOMB" (900 99999 "HEAVY-BOMB") (100 899 "MEDIUM-WEIGHT-BOMB") (0 99 "LIGHT-WEIGHT-BOMB")) Figure 2. Very Specific (Numeric) Axiom. Formation of the very specific axioms requires in-depth knowledge of both the domain the database reflects and the database itself. Knowledge of the domain is required in order to make common classifications (breakdowns) of objects in the domain. Knowledge of the database structure is needed in order to convey these breakdowns in terms of the database attributes. It should be noted that this type of axiom is not required for the system to run. If the user has no preconceived breakdowns which should appear in the representation, no very specific axioms need to be specified. The specific axioms afford the user less control than the very specific axioms, but are still a powerful device. The specific axioms point out which database attributes are more important in the domain than others. They consist of a single list of database attributes called the important attributes list.
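A minimal sketch of how a character axiom like Figure 1 could be applied to database records follows. The tuple layout and the classify function are illustrative assumptions, and only a few of Figure 1's clauses are reproduced; this is not the ENHANCE implementation.

```python
# Sketch: applying a very specific (character) axiom to records (illustrative only).

SHIP_TYPE_AXIOM = ("SHIP", "HULL_NO", "OTHER-SHIP-TYPE", [
    (1, 2, "CV", "AIRCRAFT-CARRIER"),
    (1, 2, "DD", "DESTROYER"),
    (1, 1, "A", "AUXILIARY-SHIP"),
])  # (entity, attribute, default sub-class, (start, end, characters, sub-class) clauses)

def classify(record, axiom):
    _, attribute, default, clauses = axiom
    value = record[attribute]
    for start, end, chars, subclass in clauses:
        if value[start - 1:end] == chars:   # character positions are 1-based in the axiom
            return subclass
    return default                          # fall back to the "OTHER" sub-class

print(classify({"HULL_NO": "CV63"}, SHIP_TYPE_AXIOM))   # -> AIRCRAFT-CARRIER
print(classify({"HULL_NO": "PT109"}, SHIP_TYPE_AXIOM))  # -> OTHER-SHIP-TYPE (no PT clause in this toy axiom)
```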
The important at£ributes list does not "control" the system as the very specific axioms do. Instead it suggests paths for the system to try; it has no binding effects. The important attributes list used for testing ENHANCE on the ONR database is shown in Figure 3 . Figure 3 . Important Attributes List ENHANCE has two major uses for the important attributes list: (i) It attempts to form breakdowns based on some of the attributes in the list.(2) It uses the list to decide which attributes to use as DDAs for a sub-class. ENHANCE must decide which attributes are better as the basis for a breakdown and which are better for describing the resulting sub-classes. While most attributes important to the domain are good for descriptive purposes, character attributes are better than others as the basis for a breakdown. Attributes with character values can more naturally be the basis for a breakdown since they have a small set of legal values. A breakdown based on such an attribute leads to a small well-defined set of sub-classes. Nt~meric attributes, on the other hand, often have an infinite number of legal values.A breakdown based on individual numeric values could lead to a potentially infinite number of sub-classes. This distinction between numeric and character (symbolic) attributes is also used in the TEAM system [Grosz et. al. 82] . ENHANCE first attempts to form breakdowns of an entity based on character attributes from the important attributes list.Only if no breakdowns result from these attempts, does the system attempt breakdowns based on numeric attributes.The important attributes list also plays a major role in selecting the distinguishing descriptive attributes (DDAs) for a particular sub-class.Recall that the DDAs are a set of attributes whose values differentiate one sub-class from all other sub-classes in the same breakdown. It is often the case that several sets of attributes could serve this purpose. In this situation, the important attributes list is consulted in order to choose the most salient distinguishing features. The set of attributes with the highest number of attributes on the important attributes list is chosen.The important attributes list affords the user less control over the representation formed than the very specific axioms since it only suggests paths for the system to take. The system attempts to form breakdowns based on the attributes in the list, but these breakdowns are subjected to tests encoded in the general axioms which are not used for breakdowns formed by the very specific axioms. Breakdowns formed using the very specific axioms are not subjected to as many tests since they were explicitly specified by the database designer.The final type of world knowledge axioms used by ENHANCE are the general axioms. These axioms are domain independent and need not be changed by the user. They encode general principles used for deciding such things as whether sub-classes formed should be added to the knowledge representation, and how sub-classes should be named.The ENHANCE system must be capable of naming the sub-classes. The name must uniquely identify a sub-class and should give some semantic indication of the contents of the sub-class. At the same time, they should sound reasonable to the ~HANCE user.These problems are handled by the general axioms entitled naming conventions. 
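Before turning to the naming conventions, here is a minimal sketch of the breakdown-proposal step just described: character-valued attributes from the important attributes list are tried first, with numeric attributes only as a fallback. The record format and the grouping test are assumptions for illustration, not the actual ENHANCE tests.

```python
# Sketch: proposing breakdowns from the important attributes list (assumed record format).
from collections import defaultdict

def propose_breakdowns(records, important_attributes):
    character = [a for a in important_attributes if all(isinstance(r.get(a), str) for r in records)]
    numeric = [a for a in important_attributes if a not in character]
    for attrs in (character, numeric):          # numeric attributes only as a fallback
        breakdowns = {}
        for attr in attrs:
            groups = defaultdict(list)
            for r in records:
                groups[r.get(attr)].append(r)
            if 1 < len(groups) < len(records):  # neither one big class nor all singletons
                breakdowns[attr] = dict(groups)
        if breakdowns:
            return breakdowns
    return {}

ships = [{"CLASS": "SKORY", "FUEL_CAPACITY": 100}, {"CLASS": "SKORY", "FUEL_CAPACITY": 120},
         {"CLASS": "KITTY-HAWK", "FUEL_CAPACITY": 900}]
print(list(propose_breakdowns(ships, ["CLASS", "FUEL_CAPACITY"])))  # -> ['CLASS']
```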
An example of a naming convention is: Rule 1 - The name of a sub-class of entity ENT formed using a character attribute with value VAL will be: VAL-ENT. (This is a slight simplification of the rule actually used by ENHANCE; see [McCoy 82] for further details.) Examples of sub-classes named using this rule include: WHISKY-SUBMARINE and FORRESTAL-SHIP. The ENHANCE system must also ensure that each of the sub-classes in a particular breakdown is meaningful. For instance, some of the sub-classes may contain only one individual from the database. If several such sub-classes occur, they are combined to form a CLASS-OTHER sub-class. This use of CLASS-OTHER compacts the representation while indicating that a number of instances are not similar enough to any others to form a sub-class. The DDA for CLASS-OTHER indicates what attributes are common to all entity instances that fail to make the criteria for membership in any of the larger named sub-classes. Without CLASS-OTHER this information would have to be derived by the generation system; this is a potentially time consuming process. The general axioms contain several rules which will block the formation of "CLASS-OTHER" in circumstances where it will not add information to the representation. These include: Rule 2 - Do not form CLASS-OTHER if it will contain only one individual. Rule 3 - Do not form CLASS-OTHER if it will be the only child of a superordinate. Perhaps the most important use of the general axioms is their role in deciding if an entire breakdown adds meaning to the knowledge representation. The general axioms are used to "filter out" breakdowns whose sub-classes either reflect the entity class as a whole, or the actual instances in the database. They also contain rules for handling cases when no differences between the sub-classes can be found. Examples of these rules include: Rule 4 - If a breakdown results in the formation of only one sub-type, then do not use that breakdown. Rule 5 - If every sub-class in two different breakdowns contains exactly the same individuals, then use only one of the breakdowns. The ENHANCE system consists of a set of independent modules; each is responsible for generating some piece of descriptive information for the sub-classes. When the system is invoked for a particular entity class, it first generates a number of breakdowns based on the values in the database. These breakdowns are passed from one module to the next and descriptive information is generated for each sub-class involved. This process is overseen by the general axioms, which may throw out breakdowns for which descriptive information cannot be generated. Before generating the breakdowns from the values in the database, the constraints on the values are checked and all units are converted to a common value. Any attribute values that fail to meet the constraints are noted in the representation and not used in the calculation. From these values a number of breakdowns are generated using the very specific and specific axioms. The breakdowns are first passed to the "fitting algorithm". When two or more breakdowns are generated for an entity class, the sub-classes in one breakdown may be contained in the sub-classes of the other. In this case, the sub-classes in the first breakdown should appear as the children of the sub-classes of the second breakdown, adding depth to the hierarchy. The fitting algorithm is used to calculate where the sub-classes fit in the generalization hierarchy.
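The following sketch illustrates checks of the kind given in the rules above, over a breakdown represented as a mapping from sub-class names to their instances. The representation and the exact conditions are simplifying assumptions, not the ENHANCE rule set itself.

```python
# Sketch of general-axiom style filtering and CLASS-OTHER formation (assumed representation).

def keep_breakdown(breakdown, already_kept):
    """Rule 4: reject single-sub-type breakdowns. Rule 5: reject a breakdown whose
    sub-classes contain exactly the same individuals as one already kept."""
    if len(breakdown) <= 1:
        return False
    partition = {frozenset(m) for m in breakdown.values()}
    return all({frozenset(m) for m in b.values()} != partition for b in already_kept)

def form_class_other(breakdown):
    """Merge singleton sub-classes into CLASS-OTHER, but only when that class would
    itself hold more than one individual (in the spirit of Rule 2)."""
    small = [x for members in breakdown.values() if len(members) == 1 for x in members]
    merged = {name: members for name, members in breakdown.items() if len(members) > 1}
    if len(small) > 1:
        merged["CLASS-OTHER"] = small
    return merged

b = {"AIRCRAFT-CARRIER": ["CV63", "CV67"], "FRIGATE": ["DL01"], "OCEAN-ESCORT": ["DE1052"]}
print(keep_breakdown(b, already_kept=[]))  # True
print(form_class_other(b))                 # FRIGATE and OCEAN-ESCORT fold into CLASS-OTHER
```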
After the fitting algoritt~ is run, the general axioms may intervene to throw out any breakdowns which are essentially duplicates of other breakdowns (see rule 5 above).At this point, the DDAs of the sub-classes within each breakdown are calculated. The algorithm used in this calculation is described below to illustrate the combinatoric nature of the augmentation process. If no DDAs can be found for a breakdown formed using the important attributes list, the general axioms may again intervene to throw out that breakdown.Flow of control then passes through a number of modules responsible for calculating the based DB attribute and for recording constant DB attributes and relation attributes. The actual nodes are then generated and added to the hierarchy.Generating the descriptive information for the sub-classes involves combinatoric problems which depend on the number of records for each entity in the database and the number of sub-classes formed for these entities. The ENHANCE system was implemented on a VAX 11/780, and was tested using a portion of an ONR database containing 157 records.It generated sub-type information for 7 entities and ran in approximately 159157 CPU seconds. For a database with many more records, the processing time may grow exponentially. This is not a major problem since the system is not interactive;it can be run in batch mode. In addition, it is run only once for a particular database. After it is run, the resulting representation can be used by the interactive generation system on all subsequent queries.A brief outline of the processing involved in generating the DDAs of a particular sub-class will be given. This process illustrates the kind of combinatoric problems encountered in automatic generation of sub-type information making it unreasonable computation for an interactive generation system.The Distinguishing Descriptive Attributes (DDAs) of a sub-class is a set of attributes, other than the based DB attribute, whose collective value differentiates that sub-class from all other sub-classes in the same breakdown. Finding the DDA of a sub-class is a problem which is ccmbinatoric in nature since it may require looking at all combinations of the attributes of the entity class.This problem is accentuated since it has been found that in practice, a set of attributes which differentiates one sub-class from all other sub-classes in the same breakdown does not always exist.Unless this problem is identified ahead of time, the system would examine all combinations of all of the attributes before deciding the sub-class can not be distinguished.There are several features of the set of DDAs which are desirable.(i) The set should be as s,~all as possible. (2) It should be made up of salient attributes (where possible).(3) The set should add information about that sub-class not already derivable from the representation. In other words, they should be different from the DDAS of the parent.A method for generating the DDAs could involve simply generating all 1-combinations of attributes, followed by 2-combinations etc.. until a set of attributes is found which differentiates the sub-class.Attributes that appeared in the DDA of the immediate parent sub-class would not be included in the combinations formed.To ensure that the DDA was made up of the most salient attributes, combinations of attributes from the important attributes list could be generated first. 
This method, however, does not avoid any of the combinatoric problems involved in the processing.To avoid some of these problems, a pre-processor to the combination stage of the calculation was developed. The combinations are formed of only potential-DDAs. These are a set of attributes whose value -can be used to differentiate the sub-class from at least one other sub-class.The attributes included in potential-DDAs take on a value within the sub-class that is different from the value the attributes take on in at least one other sub-class. Using the potential-DDAs ensures that each attribute in a given combination is useful in distinguishing the sub-class from all others.Calculating the potential-DDAs requires comparing the values of the attributes within the sub-class with the values within each other sub-class in turn.This calculation yields two other pieces of important information. If for a particular sub-class this comparison yields only one attribute, then this attribute is the only means for differentiating that sub-class from the sub-class the DDAs are being calculated for.In order for the DDA to differentiate the sub-class from all others, it must contain that attribute. Attributes of this type are called definite-DDAs. The second type of information identified has to do with when the sub-class can not be differentiated from all others. The comparing of attribute values of sub-classes makes immediately apparent when the DDA for a sub-class can not be found.In this case, the general axioms would rule out the breakdown containing that sub-class.* Assuming that the sub-class is found to be distinguishable, the system uses the potential-DDAs and the definite-DDAs to find the smallest and most salient set of attributes to use as the DDA. It forms combination of attributes using the definite-DDAs and me~rs of the potential-DDAs. The important attributes list is consulted to ensure that the most salient attributes are chosen as the DDA.There is a time/space tradeoff in using a * There are several cases in which ENHANCE would not rule out the breakdown, see [McCoy 82 ] for details. system like ENHANCE. Once the ~CE system is run, the generation system is relieved from the time consuming task of sub-type inferencing. ~his means, however, that a much larger knowledge representation for the generation system's use results. Since the generation system must be concerned with the amount of time it takes to answer a question, the cost of the larger knowledge representation is well worth the savings in inferencing time. If, however, at some future point, time is no longer a major factor in natural language generation, many of the ideas put forth here could be used to generate the sub-type information only as it is needed. | The TEXT system answers three types of questions about database structure:(i) requests for the definition of an entity;(2) requests for the information available about an entity; (3) requests concerning the difference between entities.It was implemented and tested using a portion of an 0NR database which contained information about vehicles and destructive devices.TEXT needs several types of information to answer the above questions. Some of this can be provided by features found in a variety of standard database models [Chen 76], [Smith & Smith 77] , [Lee & Gerritsen 78] .Of these, TEXT uses a generalization hierarch Z on the entities in order to define or identify them in terms of (I) their constituents (e.g. 
"There are two types of entities in the ONR database: destructive devices and vehicles."*) (2) their superordinates (e.g. "A destroyer is a surface ship ..A bomb is a free falling projectile." and "A whiskey is an underwater submarine ...").Each node in the hierarchy contains additional descriptive information based on standard features which is used to identify the database information associated with each entity and to indicate the distinguishing features of the entities.* The quoted material is excerpted from actual output from TEXT.One type of comparison that TEXT must ger~erate has to do with indicating why a particular individual falls into one entity sub-class as opposed to another. For example, "A ship is classified as an ocean escort if the characters 1 through 2 of its HULL NO are DE ... A ship is classified as a cruis--er if the characters 1 through 2 of its HULL NO are CG." and "A submarine is classified as an e~ho II if its CLASS is ECHO II." In order to generate this kind of comparison, TEXT must have available database information indicating the reason for a split in the generalization hierarchy. This information is provided in the based DB attribute.In comparing two entities, TEXT must be able to identify the major differences between them. Part of this difference is indicated by the descriptive distinguishing features of the entities. For example, "The missile has a target location in the air or on the earth's surface ... The torpedo has an underwater target location." and "A whiskey is an underwater submarine with a PROPULSION TYPE of DIESEl and a FLAG of RDOR." These dist'inguishing features consist of a number of attribute-value* pairs associated with each entity.They are provided in an information type termed the distinguishing descriptive attributes (DDAs) of an entity.In order for TEXT to answer questions about the information available about an entity, it must have access to the actual database information associated with each entity in the generalization hierarchy. This information is provided in what are termed the actual DB attributes (and constant values) and the r ela'~i6nal atEr ibutes (and values) .This informa£ioh -is also useful in comparing the attributes and relations associated with various entities.For example, "Other DB attributes of the missile include PROBABILITY OF KILL, SPEED, ALTI~DE ... Other DB attributes -of-the torpedo include FUSE TYPE, MAXIMUM DEPTH, ACCURACY & UNITS..." and "Echo IIs carry 16 torpedoes, betwe--e~ 16 and 99 missiles and 0 guns."The need for the various pieces of information in the knowledge representation is clear. How this representation should be created remains unanswered.The entire representation could be hand coded by the database designer. This, however, is a long and tedious process and therefore a bottleneck to the portability of TEXT.In this work, a level in the generalization hierarchy is identified that contains entities for which physical records exist in the database ~4~tabase entity classes). It is asstmled that the hierarchy above this level must be hand ceded. The information below this level, however, can be derived fr~ the contents of the database itself.The database entity classes can be subclassified on the basis of attributes whose values serve to partition the entity class into a number of mutually exclusive sub-types.For example, PEOPLE can be subclassified on the basis of attribute SEX: MALE and FEMALE. 
As pointed out by Lee and Gerritsen [Lee & Gerritsen 78] , some partitions of an entity class are more meaningful than others and hence more useful in describing the system's knowledge of the entity class.For example, a partition based on the primary key of the entity class would generate a single member sub-class for each instance in the database, thereby simply duplicating the contents of the database.The ENHANCE system relies on a set of world knowledge axioms to determine which attributes to use for partitioning and which resulting breakdowns are mean ing f ul.For each meaningful breakdown of an entity class, nodes are created in the generalization hierarchy.These nodes must contain the information types discussed above. ENHANCE computes this information based on the facts in the database. The attribute used to partition the entity class appears as the based DB attribute. The DDAs are a list of actual DB attributes, other than the based DB attribute, which when taken together distinguish a sub-class from all others in the breakdown. Since the sub-classes inherit all DB attributes from the entity class, the values of the attributes within the sub-class are important.ENHANCE records the values of all constant DB attributes and the range of values of any DB attributes which appear in the DDA of any sibling sub-class. These can be used by TEXT to compare the values of the DDAs of one sub-class with the values of the same attributes within a sibling sub-class. The values of relational attributes within a sub-class are also recorded by ENHANCE.The descriptive information will be used by the generation system to indicate how the sub-classes differ.It is therefore important that the most salient differences between the sub-classes are indicated. Here again, the world knowledge axioms are used to guide the systam in choosing the most salient information.The world knowledge axioms fall into three categories which reflect the extent to which they must be changed when applying ENHANCE to a new database.~hey range from very specific axioms, which must always be changed, to very general axioms, which are domain independent. The axioms and their use by the system will be described after first giving an example of a question answered by TEXT based on information created by ENHANCE.This example illustrates how TEXT integrates the above information to form meaningful comparisons between entities.This example is taken from actual TEXT output in response to the question: What is the difference between an OCEAN-ESCORT and a CRUISER?* i) The cruiser and the ocean escort are surface ships.2) The ship's surface-going capabilities are provided by the DB attributes DISPLACEMENT and DRAFT.3) The ship has DB attributes MAXIMUM_SPEED, PROPULSION, FUEL( FUEL CAPACITY and FUEL TYPE), DIMENSIONS, SPE~DEPENDENT RANGE and OFFICIAL NAME. 4) Ocean escorts have a DISPLACEMENT ~tween 3400 and 4100. 5) All ocean escorts in the ONR database have R~MARKS of 0, FUEL TYPE of BNKR, FLAG of BLBL and MAST HEIGHT--of 85. 6) Ocean escorts carry between 2 and 22 torpedoes, 16 missiles and between 1 and 2 guns. 7) A ship is classified as an ocean escort if the characters 1 through 2 of its HULL NO are DE. 8) Cruisers have a PROPULSION of S--TMTURGRD and a LENGTH between 510 and 673. 9) All cruisers in the ONR database have RE~MA~S of 0. 
I0) Cruisers carry between 8 and 42 torpedoes, between 4 and 98 missiles and between I and 4 guns.ii) A ship is classified as a cruiser if the characters 1 through 2 of its HULL NO are CL or the characters i through 2 o~ its HULL NO are CG.12) The ocean escort, therefore, has a smaller LENGTH and a smaller DISPLACEMENT than the cruiser.The first sentence is derived from the fact that both ocean-escorts and cruisers are sub-types of entity class ship.TEXT then goes on to describe some characteristics of the ship (sentences 2 and 3). Information about the ship is part of the hand coded representation, it is not generated by ENHANCE. Next, the distinguishing features (indicated by the DDA) of the ocean-escort are identified followed by a listing of its constant DB attributes (sentences 4 and 5). The values of the relation attributes are then identified (sentence 6) followed by a statement drawn from the based DB attribute of the ocean-escort. Next, this same type of information is used to generate parallel information about the cruiser.1~e text closes with a simple inference based on the DDAs of the two types of ships.The following example illustrates how the TEXT system uses the information generated by ENHANCE. The example is taken from actual output generated by the TEXT system in response to the question :What is an AIRCRAFT-CARRIER?. It utilizes the portion of the representation generated by ENHANCE. Following the text is a brief description of where each piece of information was found in the representation.(The sentences are numbered here to simplify the discussion: there are no sentence numbers in the actual material produced by TEXT).(i) An aircraft carrier is a surface ship with a DISPLACEMENT between 78000 and 80800 and a LENGTH between 1039 and 1063.(2) Aircraft carriers have a greater LENGTH than all other ships and a greater DISPLACEMENT than most other ships.(3) Mine warfare ships, for example, have a DISPLACEMENT of 320 and a LENGTH of 144.(4) 7%11 aircraft carriers in the ONR database have R~S of 0, FUEL TYPE of BNKR, FLAG of BLBL, BEAM of --252, ENDURANCE RANGE of 4000, ECONOMIC SPEED of 12, ENDURANCE--SPEED of 30 and PROPULSION of STM~'ORGRD? (5) A ship is classified as an aircraft carrier if the characters 1 through 2 of its HULL NO are CV.In this example, the DDAs of aircraft carrier are used to identify its features (sentence i) and to make a comparison between aircraft carriers and all other types of ships (sentences 2 and 3). Since the ENHANCE system ensures that the values of the DDAs for one sub-class appear in the DB attribute list of every other sub-class in the same breakdown, the comparisons between the sub-classes are easily calculated by the TEXT system.M~reover, since ENHANCE has selected out several attributes as more important than others (based on the world knowledge axioms), TEXT can make a meaningful comparison instead of one less relevant.The final sentence is derived from the based DB attribute of aircraft carrier.There are several extensions of the ENHANCE system which would make the knowledge representation more closely reflect the real world. These include (i) the use of very specific axioms in the calculation of descriptive information and (2) the use of relational information as the basis for a breakdown.At the present time, all descriptive sub-class information is calculated from the actual contents of the database, although sub-class formation may be based on the very specific axioms. 
The database contents may not adequately capture the real world distinctions between the sub-classes.For this reason, a set of very specific axioms specifying descriptive information could be adopted. The need for such axioms can best be seen in the DDA generated for ship sub-type AIRCRAFT-CARRIER.Since there are no attributes in the database indicating the function of a ship, there is no way of using the fact that the function of an AIRCRAFT-CARRIER is to carry aircraft to distinguish AIRCRAFT-CARRIERS from other ships. This is, however, a very important real world distinction.Very specific axioms could be developed to allow the user to specify these important distinctions not captured the the contents of the database.The ENHANCE system could also be improved by utilizing the relational information when creating the breakdowns.For example, missiles can be divided into sub-classes on the basis of what kind of vehicles they are carried by.AIR-TO-AIR and AIR-TO-SURFACE missiles are carried on aircraft, while SURFACE-TO-SURFACE missiles are carried on ships.Thus, the relations often contain important sub-class distinctions that could be used by the system. | A system has been described which automatically creates part of a knowledge representation used for natural language generation. 'IRis enables the generation system to give a richer description of the database, since the information generated by ENHANCE can be used to make comparisons between sub-classes which would otherwise require use of extensive inferencing. ENHANCE generates sub-classes of the entity classes in the database;it uses a set of world knowledge axioms to guide the formation of the sub-classes.The axioms ensure the sub-classes are meaningful and that salient information is chosen for the sub-class descriptions. This in turn ensures that the generation system will have salient information available to use making the generated text more meaningful to the user. 9.0 ACKNCWLEDGEMENTS I would like to thank Aravind Joshi and Kathleen McKeown for their many helpful comments throughout the course of this work, and Bonnie Webber, Eric Mays, and Sitaram Lanka for their comments on the content and style of this paper. | Main paper:
knowledge representation for generation:
The TEXT system answers three types of questions about database structure: (1) requests for the definition of an entity; (2) requests for the information available about an entity; (3) requests concerning the difference between entities. It was implemented and tested using a portion of an ONR database which contained information about vehicles and destructive devices. TEXT needs several types of information to answer the above questions. Some of this can be provided by features found in a variety of standard database models [Chen 76], [Smith & Smith 77], [Lee & Gerritsen 78]. Of these, TEXT uses a generalization hierarchy on the entities in order to define or identify them in terms of (1) their constituents (e.g. "There are two types of entities in the ONR database: destructive devices and vehicles."*) and (2) their superordinates (e.g. "A destroyer is a surface ship ... A bomb is a free falling projectile." and "A whiskey is an underwater submarine ..."). Each node in the hierarchy contains additional descriptive information based on standard features which is used to identify the database information associated with each entity and to indicate the distinguishing features of the entities.

* The quoted material is excerpted from actual output from TEXT.

One type of comparison that TEXT must generate has to do with indicating why a particular individual falls into one entity sub-class as opposed to another. For example, "A ship is classified as an ocean escort if the characters 1 through 2 of its HULL NO are DE ... A ship is classified as a cruiser if the characters 1 through 2 of its HULL NO are CG." and "A submarine is classified as an echo II if its CLASS is ECHO II." In order to generate this kind of comparison, TEXT must have available database information indicating the reason for a split in the generalization hierarchy. This information is provided in the based DB attribute.

In comparing two entities, TEXT must be able to identify the major differences between them. Part of this difference is indicated by the descriptive distinguishing features of the entities. For example, "The missile has a target location in the air or on the earth's surface ... The torpedo has an underwater target location." and "A whiskey is an underwater submarine with a PROPULSION TYPE of DIESEL and a FLAG of RDOR." These distinguishing features consist of a number of attribute-value pairs associated with each entity. They are provided in an information type termed the distinguishing descriptive attributes (DDAs) of an entity.

In order for TEXT to answer questions about the information available about an entity, it must have access to the actual database information associated with each entity in the generalization hierarchy. This information is provided in what are termed the actual DB attributes (and constant values) and the relational attributes (and values). This information is also useful in comparing the attributes and relations associated with various entities. For example, "Other DB attributes of the missile include PROBABILITY OF KILL, SPEED, ALTITUDE ... Other DB attributes of the torpedo include FUSE TYPE, MAXIMUM DEPTH, ACCURACY & UNITS ..." and "Echo IIs carry 16 torpedoes, between 16 and 99 missiles and 0 guns."
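To make these information types concrete, here is a small Python sketch of how one node of the generalization hierarchy might bundle them. The field names and the sample WHISKY node are hypothetical illustrations, not TEXT's actual internal representation.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HierarchyNode:
    """One node of the generalization hierarchy, carrying the information
    types TEXT draws on when defining or comparing entities."""
    name: str
    superordinate: Optional[str] = None        # e.g. a WHISKY is a kind of SUBMARINE
    based_db_attribute: Optional[dict] = None  # the reason for the split in the hierarchy
    ddas: dict = field(default_factory=dict)   # distinguishing descriptive attributes
    constant_db_attributes: dict = field(default_factory=dict)
    relational_attributes: dict = field(default_factory=dict)

# A hypothetical node for the WHISKY sub-class of SUBMARINE.
whisky = HierarchyNode(
    name="WHISKY-SUBMARINE",
    superordinate="SUBMARINE",
    based_db_attribute={"attribute": "CLASS", "value": "WHISKY"},
    ddas={"PROPULSION TYPE": "DIESEL", "FLAG": "RDOR"},
    relational_attributes={"CARRIES": {"TORPEDOES": (16, 16)}},
)
print(whisky.name, "is a", whisky.superordinate, "with DDAs", whisky.ddas)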
augmenting the knowledge representation:
The need for the various pieces of information in the knowledge representation is clear. How this representation should be created remains unanswered. The entire representation could be hand coded by the database designer. This, however, is a long and tedious process and therefore a bottleneck to the portability of TEXT. In this work, a level in the generalization hierarchy is identified that contains entities for which physical records exist in the database (database entity classes). It is assumed that the hierarchy above this level must be hand coded. The information below this level, however, can be derived from the contents of the database itself.

The database entity classes can be subclassified on the basis of attributes whose values serve to partition the entity class into a number of mutually exclusive sub-types. For example, PEOPLE can be subclassified on the basis of attribute SEX: MALE and FEMALE. As pointed out by Lee and Gerritsen [Lee & Gerritsen 78], some partitions of an entity class are more meaningful than others and hence more useful in describing the system's knowledge of the entity class. For example, a partition based on the primary key of the entity class would generate a single member sub-class for each instance in the database, thereby simply duplicating the contents of the database. The ENHANCE system relies on a set of world knowledge axioms to determine which attributes to use for partitioning and which resulting breakdowns are meaningful.

For each meaningful breakdown of an entity class, nodes are created in the generalization hierarchy. These nodes must contain the information types discussed above. ENHANCE computes this information based on the facts in the database. The attribute used to partition the entity class appears as the based DB attribute. The DDAs are a list of actual DB attributes, other than the based DB attribute, which when taken together distinguish a sub-class from all others in the breakdown. Since the sub-classes inherit all DB attributes from the entity class, the values of the attributes within the sub-class are important. ENHANCE records the values of all constant DB attributes and the range of values of any DB attributes which appear in the DDA of any sibling sub-class. These can be used by TEXT to compare the values of the DDAs of one sub-class with the values of the same attributes within a sibling sub-class. The values of relational attributes within a sub-class are also recorded by ENHANCE.

The descriptive information will be used by the generation system to indicate how the sub-classes differ. It is therefore important that the most salient differences between the sub-classes are indicated. Here again, the world knowledge axioms are used to guide the system in choosing the most salient information. The world knowledge axioms fall into three categories which reflect the extent to which they must be changed when applying ENHANCE to a new database. They range from very specific axioms, which must always be changed, to very general axioms, which are domain independent.
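A rough Python sketch of this partitioning step, and of the constant values and value ranges recorded for each resulting sub-class, is given below. The toy PEOPLE records and helper names are invented for illustration and simplify what ENHANCE actually records.

from collections import defaultdict

# Toy records standing in for one database entity class (values are invented).
people = [
    {"NAME": "A", "SEX": "MALE",   "AGE": 34},
    {"NAME": "B", "SEX": "FEMALE", "AGE": 29},
    {"NAME": "C", "SEX": "FEMALE", "AGE": 41},
]

def partition(records, attribute):
    """Split an entity class into mutually exclusive sub-types according to
    the value of one attribute (the based DB attribute of the breakdown)."""
    groups = defaultdict(list)
    for record in records:
        groups[record[attribute]].append(record)
    return dict(groups)

def summarize(group, skip=("NAME",)):
    """Record constant attribute values and numeric value ranges within a sub-class."""
    summary = {}
    for attr in group[0]:
        if attr in skip:
            continue
        values = [record[attr] for record in group]
        if all(v == values[0] for v in values):
            summary[attr] = values[0]                   # constant within the sub-class
        elif all(isinstance(v, (int, float)) for v in values):
            summary[attr] = (min(values), max(values))  # range within the sub-class
    return summary

for value, group in partition(people, "SEX").items():
    print(value, summarize(group))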
The axioms and their use by the system will be described after first giving an example of a question answered by TEXT based on information created by ENHANCE. This example illustrates how TEXT integrates the above information to form meaningful comparisons between entities. This example is taken from actual TEXT output in response to the question: What is the difference between an OCEAN-ESCORT and a CRUISER?*

1) The cruiser and the ocean escort are surface ships.
2) The ship's surface-going capabilities are provided by the DB attributes DISPLACEMENT and DRAFT.
3) The ship has DB attributes MAXIMUM_SPEED, PROPULSION, FUEL (FUEL CAPACITY and FUEL TYPE), DIMENSIONS, SPEED DEPENDENT RANGE and OFFICIAL NAME.
4) Ocean escorts have a DISPLACEMENT between 3400 and 4100.
5) All ocean escorts in the ONR database have REMARKS of 0, FUEL TYPE of BNKR, FLAG of BLBL and MAST HEIGHT of 85.
6) Ocean escorts carry between 2 and 22 torpedoes, 16 missiles and between 1 and 2 guns.
7) A ship is classified as an ocean escort if the characters 1 through 2 of its HULL NO are DE.
8) Cruisers have a PROPULSION of STMTURGRD and a LENGTH between 510 and 673.
9) All cruisers in the ONR database have REMARKS of 0.
10) Cruisers carry between 8 and 42 torpedoes, between 4 and 98 missiles and between 1 and 4 guns.
11) A ship is classified as a cruiser if the characters 1 through 2 of its HULL NO are CL or the characters 1 through 2 of its HULL NO are CG.
12) The ocean escort, therefore, has a smaller LENGTH and a smaller DISPLACEMENT than the cruiser.

* The sentences are numbered here to simplify the discussion: there are no sentence numbers in the actual material produced by TEXT.

The first sentence is derived from the fact that both ocean-escorts and cruisers are sub-types of entity class ship. TEXT then goes on to describe some characteristics of the ship (sentences 2 and 3). Information about the ship is part of the hand coded representation; it is not generated by ENHANCE. Next, the distinguishing features (indicated by the DDA) of the ocean-escort are identified, followed by a listing of its constant DB attributes (sentences 4 and 5). The values of the relation attributes are then identified (sentence 6), followed by a statement drawn from the based DB attribute of the ocean-escort. Next, this same type of information is used to generate parallel information about the cruiser. The text closes with a simple inference based on the DDAs of the two types of ships.
world knowledge axioms:
In order for the generation system to give meaningful descriptions of the database, the knowledge representation must effectively capture both a typical user's view of the domain and how that domain has been modelled within the system. Without real world knowledge indicating what a user finds meaningful, there are several ways in which an automatically generated taxonomy may deviate from how a user views the domain: (1) the representation may fail to capture the user's preconceived notions of how a certain database entity class should be partitioned into sub-classes; (2) the system may partition an entity class on the basis of a non-salient attribute, leading to an inappropriate breakdown; (3) non-salient information may be chosen to describe the sub-classes, leading to inappropriate descriptions; (4) a breakdown may fail to add meaning to the representation (e.g. a partition chosen may simply duplicate information already available).

The first case will occur if the sub-types of these breakdowns are not completely reflected in the database attribute names and values. For example, even though the partition of SHIP into its various types (e.g. Aircraft-Carrier, Destroyer, etc.) is very common, there may be no attribute SHIP TYPE in the database to form this partition. The partition can be derived, however, if a semantic mapping between the sub-type names and existing attribute-value pairs can be identified. In this case, the partition can be derived by associating the first few characters of attribute HULL NO with the various ship-types. The very specific axioms are provided as a means for defining such mappings.

The taxonomy may also deviate from what a user might expect if the system partitions an entity class on the basis of non-salient attributes. It seems very natural to have a breakdown of SHIP based on attribute CLASS, but one based on attribute FUEL-CAPACITY would seem less appropriate. A partition based on CLASS would yield sub-classes of SHIP such as SKORY and KITTY-HAWK, while one on FUEL CAPACITY could only yield ones like SHIPS-WITH-100-FUEL-CAPACITY. Since saliency is not an intrinsic property of an attribute, there must be a way of indicating attributes salient in the domain. The specific axioms are provided for this purpose.

The user's view of the domain will not be captured if the information chosen to describe the sub-classes is not chosen from attributes important to the domain. Saliency is crucial in choosing the descriptive information (particularly the DDAs) for the sub-classes. Even though a DESTROYER may be differentiated from other types of ships by its ECONOMIC-SPEED, it seems more informative to distinguish it in terms of the more commonly mentioned property DISPLACEMENT. Here again, this saliency information is provided by the specific axioms.

A final problem faced by a system which only relies on the database contents is that a partition formed may be essentially meaningless (adding no new information to the representation). This will occur if all of the instances in the database fall into the same sub-class or if each falls into a different one. Such breakdowns either exactly reflect the entity class as a whole, or reflect the individual instances.
This same type of problem occurs if the only difference between two sub-classes is the attribute the breakdown is based on. Thus, no trend can be found among the other attributes within the sub-classes formed. Such a breakdown would add no information that could not be trivially derived from the database itself. These types of breakdowns are "filtered out" using the general axioms.

The world knowledge axioms guide ENHANCE to ensure that the breakdowns formed are appropriate and that salient information is chosen for the sub-class descriptions. At the same time, the axioms give the designer control over the representation formed. The axioms can be changed and the system rerun. The new representation will reflect the new set of world knowledge axioms. In this way, the database designer can tune the representation to his/her needs. Each axiom category, how they are used by ENHANCE, and the problems each category solves are discussed below.

The very specific axioms give the user the most control over the representation formed. They let the user specify breakdowns that s/he would a priori like to appear in the knowledge representation. The axioms are formulated in such a way as to allow breakdowns on parts of the value field of a character attribute, and on ranges of values for a numeric attribute (examples of each are given below). This type of breakdown could not be formed without explicit information indicating the defining portions of the attribute value field and their associated semantic values.

A sample use of the very specific axioms can be found in classifying ships by their type (i.e. Aircraft-carriers, Destroyers, Mine-warfare-ships, etc.). This is a very common breakdown of ships. Assume there is no database attribute which explicitly gives the ship type. With no additional information, there is no way of generating that breakdown for ship. A user knowledgeable of the domain would note that there is a way to derive the type of a ship based on its HULL NO. In fact, the first one or two characters of the HULL NO uniquely identify the ship type. For example, all AIRCRAFT-CARRIERS have a HULL NO whose first two characters are CV, while the first two characters of the HULL NO of a CRUISER are CA or CG or CL. This information can be captured in a very specific axiom which maps part of a character attribute field into the sub-type names. An example of such an axiom is shown in Figure 1.

(SHIP "SHIP HULL NO" "OTHER-SHIP-TYPE"
  (1 2 "CV" "AIRCRAFT-CARRIER")
  (1 2 "CA" "CRUISER")
  (1 2 "CG" "CRUISER")
  (1 2 "CL" "CRUISER")
  (1 2 "DD" "DESTROYER")
  (1 2 "DL" "FRIGATE")
  (1 2 "DE" "OCEAN-ESCORT")
  (1 2 "PC" "PATROL-SHIP-AND-CRAFT")
  (1 2 "PG" "PATROL-SHIP-AND-CRAFT")
  (1 2 "PT" "PATROL-SHIP-AND-CRAFT")
  (1 1 "L" "AMPHIBIOUS-AND-LANDING-SHIP")
  (1 2 "MC" "MINE-WARFARE-SHIP")
  (1 2 "MS" "MINE-WARFARE-SHIP")
  (1 1 "A" "AUXILIARY-SHIP"))

Figure 1. Very Specific (Character) Axiom

Sub-typing of entities may also be specified based on the ranges of values of a numeric attribute. For example, the entity BOMB is often sub-typed by the range of the attribute BOMB WEIGHT. A BOMB is classified as being HEAVY if its weight is above 900, MEDIUM-WEIGHT if it is between 100 and 899, and LIGHT-WEIGHT if its weight is less than 100. An axiom which specifies this is shown in Figure 2.

(BOMB "BOMB WEIGHT" "OTHER-WEIGHT-BOMB"
  (900 99999 "HEAVY-BOMB")
  (100 899 "MEDIUM-WEIGHT-BOMB")
  (0 99 "LIGHT-WEIGHT-BOMB"))

Figure 2. Very Specific (Numeric) Axiom
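As a rough illustration of how axioms of this shape could be interpreted, the following Python sketch classifies a record by either kind of axiom. The data mirrors Figures 1 and 2 (abridged), but the code is only an assumed rendering of the mechanism, not ENHANCE's implementation.

# A character axiom maps a slice of an attribute value to a sub-type name;
# a numeric axiom maps a value range to a sub-type name (cf. Figures 1 and 2).
SHIP_AXIOM = ("SHIP", "HULL NO", "OTHER-SHIP-TYPE",
              [(1, 2, "CV", "AIRCRAFT-CARRIER"),
               (1, 2, "CA", "CRUISER"),
               (1, 2, "CG", "CRUISER"),
               (1, 2, "DD", "DESTROYER"),
               (1, 2, "DE", "OCEAN-ESCORT")])

BOMB_AXIOM = ("BOMB", "BOMB WEIGHT", "OTHER-WEIGHT-BOMB",
              [(900, 99999, "HEAVY-BOMB"),
               (100, 899, "MEDIUM-WEIGHT-BOMB"),
               (0, 99, "LIGHT-WEIGHT-BOMB")])

def classify_by_character_axiom(record, axiom):
    _entity, attribute, default, cases = axiom
    value = record[attribute]
    for start, end, match, name in cases:
        if value[start - 1:end] == match:   # characters `start` through `end`, 1-indexed
            return name
    return default

def classify_by_numeric_axiom(record, axiom):
    _entity, attribute, default, cases = axiom
    value = record[attribute]
    for low, high, name in cases:
        if low <= value <= high:
            return name
    return default

print(classify_by_character_axiom({"HULL NO": "CV-59"}, SHIP_AXIOM))   # AIRCRAFT-CARRIER
print(classify_by_numeric_axiom({"BOMB WEIGHT": 250}, BOMB_AXIOM))     # MEDIUM-WEIGHT-BOMB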
Formation of the very specific axioms requires in-depth knowledge of both the domain the database reflects, and the database itself. Knowledge of the domain is required in order to make common classifications (breakdowns) of objects in the domain. Knowledge of the database structure is needed in order to convey these breakdowns in terms of the database attributes. It should be noted that this type of axiom is not required for the system to run. If the user has no preconceived breakdowns which should appear in the representation, no very specific axioms need to be specified.

The specific axioms afford the user less control than the very specific axioms, but are still a powerful device. The specific axioms point out which database attributes are more important in the domain than others. They consist of a single list of database attributes called the important attributes list. The important attributes list does not "control" the system as the very specific axioms do. Instead it suggests paths for the system to try; it has no binding effects. The important attributes list used for testing ENHANCE on the ONR database is shown in Figure 3.

Figure 3. Important Attributes List

ENHANCE has two major uses for the important attributes list: (1) it attempts to form breakdowns based on some of the attributes in the list; (2) it uses the list to decide which attributes to use as DDAs for a sub-class. ENHANCE must decide which attributes are better as the basis for a breakdown and which are better for describing the resulting sub-classes. While most attributes important to the domain are good for descriptive purposes, character attributes are better than others as the basis for a breakdown. Attributes with character values can more naturally be the basis for a breakdown since they have a small set of legal values. A breakdown based on such an attribute leads to a small well-defined set of sub-classes. Numeric attributes, on the other hand, often have an infinite number of legal values. A breakdown based on individual numeric values could lead to a potentially infinite number of sub-classes. This distinction between numeric and character (symbolic) attributes is also used in the TEAM system [Grosz et al. 82]. ENHANCE first attempts to form breakdowns of an entity based on character attributes from the important attributes list. Only if no breakdowns result from these attempts does the system attempt breakdowns based on numeric attributes.

The important attributes list also plays a major role in selecting the distinguishing descriptive attributes (DDAs) for a particular sub-class. Recall that the DDAs are a set of attributes whose values differentiate one sub-class from all other sub-classes in the same breakdown. It is often the case that several sets of attributes could serve this purpose. In this situation, the important attributes list is consulted in order to choose the most salient distinguishing features. The set of attributes with the highest number of attributes on the important attributes list is chosen.

The important attributes list affords the user less control over the representation formed than the very specific axioms since it only suggests paths for the system to take. The system attempts to form breakdowns based on the attributes in the list, but these breakdowns are subjected to tests encoded in the general axioms which are not used for breakdowns formed by the very specific axioms.
Breakdowns formed using the very specific axioms are not subjected to as many tests since they were explicitly specified by the database designer.

The final type of world knowledge axioms used by ENHANCE are the general axioms. These axioms are domain independent and need not be changed by the user. They encode general principles used for deciding such things as whether sub-classes formed should be added to the knowledge representation, and how sub-classes should be named.

The ENHANCE system must be capable of naming the sub-classes. The name must uniquely identify a sub-class and should give some semantic indication of the contents of the sub-class. At the same time, it should sound reasonable to the ENHANCE user. These problems are handled by the general axioms entitled naming conventions. An example of a naming convention is:

Rule 1 - The name of a sub-class of entity ENT formed using a character* attribute with value VAL will be: VAL-ENT.

* This is a slight simplification of the rule actually used by ENHANCE; see [McCoy 82] for further details.

Examples of sub-classes named using this rule include: WHISKY-SUBMARINE and FORRESTAL-SHIP.

The ENHANCE system must also ensure that each of the sub-classes in a particular breakdown is meaningful. For instance, some of the sub-classes may contain only one individual from the database. If several such sub-classes occur, they are combined to form a CLASS-OTHER sub-class. This use of CLASS-OTHER compacts the representation while indicating that a number of instances are not similar enough to any others to form a sub-class. The DDA for CLASS-OTHER indicates what attributes are common to all entity instances that fail to make the criteria for membership in any of the larger named sub-classes. Without CLASS-OTHER this information would have to be derived by the generation system; this is a potentially time consuming process.

The general axioms contain several rules which will block the formation of CLASS-OTHER in circumstances where it will not add information to the representation. These include:

Rule 2 - Do not form CLASS-OTHER if it will contain only one individual.
Rule 3 - Do not form CLASS-OTHER if it will be the only child of a superordinate.

Perhaps the most important use of the general axioms is their role in deciding if an entire breakdown adds meaning to the knowledge representation. The general axioms are used to "filter out" breakdowns whose sub-classes either reflect the entity class as a whole, or the actual instances in the database. They also contain rules for handling cases when no differences between the sub-classes can be found. Examples of these rules include:

Rule 4 - If a breakdown results in the formation of only one sub-type, then do not use that breakdown.
Rule 5 - If every sub-class in two different breakdowns contains exactly the same individuals, then use only one of the breakdowns.
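A minimal Python sketch of how Rules 1 through 4 could be applied to a candidate breakdown is given below. The record data and function names are hypothetical, and Rule 5, which compares two breakdowns, is omitted for brevity.

def name_subclass(entity, value):
    # Rule 1: a sub-class of entity ENT formed from character value VAL is named VAL-ENT.
    return f"{value}-{entity}"

def apply_general_axioms(entity, breakdown):
    """Apply Rules 1-4 to one breakdown (a dict mapping an attribute value to
    the database records taking that value); returns named sub-classes or None."""
    named = {name_subclass(entity, value): members
             for value, members in breakdown.items() if len(members) > 1}
    leftovers = [r for members in breakdown.values() if len(members) == 1 for r in members]
    # Rules 2 and 3: form CLASS-OTHER only when it would group more than one
    # leftover individual and would not be the only child of the superordinate.
    if len(leftovers) > 1 and named:
        named["CLASS-OTHER"] = leftovers
    # Rule 4: a breakdown that yields only one sub-type adds nothing.
    return named if len(named) > 1 else None

ships_by_class = {
    "SKORY":      [{"HULL NO": "DD-101"}, {"HULL NO": "DD-102"}],
    "FORRESTAL":  [{"HULL NO": "CV-59"}, {"HULL NO": "CV-60"}],
    "KITTY-HAWK": [{"HULL NO": "CV-63"}],
    "IWO-JIMA":   [{"HULL NO": "LPH-2"}],
}
print(apply_general_axioms("SHIP", ships_by_class))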
system overview:
The ENHANCE system consists of a set of independent modules; each is responsible for generating some piece of descriptive information for the sub-classes. When the system is invoked for a particular entity class, it first generates a number of breakdowns based on the values in the database. These breakdowns are passed from one module to the next and descriptive information is generated for each sub-class involved. This process is overseen by the general axioms, which may throw out breakdowns for which descriptive information can not be generated.

Before generating the breakdowns from the values in the database, the constraints on the values are checked and all units are converted to a common value. Any attribute values that fail to meet the constraints are noted in the representation and not used in the calculation. From these values a number of breakdowns are generated using the very specific and specific axioms.

The breakdowns are first passed to the "fitting algorithm". When two or more breakdowns are generated for an entity class, the sub-classes in one breakdown may be contained in the sub-classes of the other. In this case, the sub-classes in the first breakdown should appear as the children of the sub-classes of the second breakdown, adding depth to the hierarchy. The fitting algorithm is used to calculate where the sub-classes fit in the generalization hierarchy. After the fitting algorithm is run, the general axioms may intervene to throw out any breakdowns which are essentially duplicates of other breakdowns (see Rule 5 above).

At this point, the DDAs of the sub-classes within each breakdown are calculated. The algorithm used in this calculation is described below to illustrate the combinatoric nature of the augmentation process. If no DDAs can be found for a breakdown formed using the important attributes list, the general axioms may again intervene to throw out that breakdown. Flow of control then passes through a number of modules responsible for calculating the based DB attribute and for recording constant DB attributes and relation attributes. The actual nodes are then generated and added to the hierarchy.

Generating the descriptive information for the sub-classes involves combinatoric problems which depend on the number of records for each entity in the database and the number of sub-classes formed for these entities. The ENHANCE system was implemented on a VAX 11/780, and was tested using a portion of an ONR database containing 157 records. It generated sub-type information for 7 entities and ran in approximately 159157 CPU seconds. For a database with many more records, the processing time may grow exponentially. This is not a major problem since the system is not interactive; it can be run in batch mode. In addition, it is run only once for a particular database. After it is run, the resulting representation can be used by the interactive generation system on all subsequent queries.
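Before turning to the DDA computation, here is a rough Python sketch of the fitting step described above: it checks whether every sub-class of one breakdown is wholly contained in some sub-class of another, so that the first can be nested beneath the second. The record keys and sample data are assumptions made for illustration, not ENHANCE's code.

def fits_under(child_breakdown, parent_breakdown):
    """Return a placement (child sub-class -> parent sub-class) if every child
    sub-class is contained in some parent sub-class, otherwise None."""
    def key(records):
        return frozenset(r["ID"] for r in records)
    parents = {name: key(members) for name, members in parent_breakdown.items()}
    placement = {}
    for name, members in child_breakdown.items():
        ids = key(members)
        hosts = [p for p, pids in parents.items() if ids <= pids]
        if not hosts:
            return None
        placement[name] = hosts[0]
    return placement

by_class = {"SKORY": [{"ID": 1}], "KITTY-HAWK": [{"ID": 2}, {"ID": 3}]}
by_type  = {"DESTROYER": [{"ID": 1}], "AIRCRAFT-CARRIER": [{"ID": 2}, {"ID": 3}]}
print(fits_under(by_class, by_type))   # each CLASS sub-class nests under a TYPE sub-class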
A brief outline of the processing involved in generating the DDAs of a particular sub-class will be given. This process illustrates the kind of combinatoric problems encountered in automatic generation of sub-type information, making it an unreasonable computation for an interactive generation system. The Distinguishing Descriptive Attributes (DDAs) of a sub-class are a set of attributes, other than the based DB attribute, whose collective value differentiates that sub-class from all other sub-classes in the same breakdown. Finding the DDA of a sub-class is a problem which is combinatoric in nature since it may require looking at all combinations of the attributes of the entity class. This problem is accentuated since it has been found that in practice, a set of attributes which differentiates one sub-class from all other sub-classes in the same breakdown does not always exist. Unless this problem is identified ahead of time, the system would examine all combinations of all of the attributes before deciding the sub-class can not be distinguished.

There are several features of the set of DDAs which are desirable: (1) the set should be as small as possible; (2) it should be made up of salient attributes (where possible); (3) the set should add information about that sub-class not already derivable from the representation. In other words, they should be different from the DDAs of the parent.

A method for generating the DDAs could involve simply generating all 1-combinations of attributes, followed by 2-combinations, etc., until a set of attributes is found which differentiates the sub-class. Attributes that appeared in the DDA of the immediate parent sub-class would not be included in the combinations formed. To ensure that the DDA was made up of the most salient attributes, combinations of attributes from the important attributes list could be generated first. This method, however, does not avoid any of the combinatoric problems involved in the processing.

To avoid some of these problems, a pre-processor to the combination stage of the calculation was developed. The combinations are formed of only potential-DDAs. These are a set of attributes whose value can be used to differentiate the sub-class from at least one other sub-class. The attributes included in potential-DDAs take on a value within the sub-class that is different from the value the attributes take on in at least one other sub-class. Using the potential-DDAs ensures that each attribute in a given combination is useful in distinguishing the sub-class from all others.

Calculating the potential-DDAs requires comparing the values of the attributes within the sub-class with the values within each other sub-class in turn. This calculation yields two other pieces of important information. If for a particular sub-class this comparison yields only one attribute, then this attribute is the only means for differentiating that sub-class from the sub-class the DDAs are being calculated for. In order for the DDA to differentiate the sub-class from all others, it must contain that attribute. Attributes of this type are called definite-DDAs. The second type of information identified has to do with when the sub-class can not be differentiated from all others. The comparing of attribute values of sub-classes makes immediately apparent when the DDA for a sub-class can not be found. In this case, the general axioms would rule out the breakdown containing that sub-class.*

* There are several cases in which ENHANCE would not rule out the breakdown; see [McCoy 82] for details.

Assuming that the sub-class is found to be distinguishable, the system uses the potential-DDAs and the definite-DDAs to find the smallest and most salient set of attributes to use as the DDA. It forms combinations of attributes using the definite-DDAs and members of the potential-DDAs. The important attributes list is consulted to ensure that the most salient attributes are chosen as the DDA.
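The following Python sketch is one possible rendering of this search, assuming each sub-class is summarized by a dictionary of attribute values. It is meant only to illustrate the interplay of potential-DDAs, definite-DDAs and the important attributes list; the attribute values used are invented, and the real system works over the database records themselves.

from itertools import combinations

def find_dda(target, siblings, important=()):
    """Find a small, salient set of attributes whose collective values
    separate `target` (attribute -> value) from every sibling sub-class."""
    potential, definite = set(), set()
    for sibling in siblings:
        differing = {a for a in target if target[a] != sibling.get(a)}
        if not differing:
            return None             # indistinguishable: the breakdown would be ruled out
        if len(differing) == 1:     # only one attribute separates this pair, so it must be used
            definite |= differing
        potential |= differing
    # Prefer attributes on the important attributes list when building combinations.
    candidates = sorted(potential - definite, key=lambda a: a not in important)
    base = tuple(sorted(definite))
    for size in range(len(candidates) + 1):
        for extra in combinations(candidates, size):
            chosen = base + extra
            if chosen and all(any(target[a] != sibling.get(a) for a in chosen)
                              for sibling in siblings):
                return chosen
    return None

ocean_escort = {"DISPLACEMENT": 3700, "PROPULSION": "GEARTURB", "LENGTH": 438}
siblings = [{"DISPLACEMENT": 78000, "PROPULSION": "STMTURGRD", "LENGTH": 1040},
            {"DISPLACEMENT": 9200,  "PROPULSION": "STMTURGRD", "LENGTH": 560}]
print(find_dda(ocean_escort, siblings, important=("DISPLACEMENT", "LENGTH")))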
There is a time/space tradeoff in using a system like ENHANCE. Once the ENHANCE system is run, the generation system is relieved from the time consuming task of sub-type inferencing. This means, however, that a much larger knowledge representation results for the generation system's use. Since the generation system must be concerned with the amount of time it takes to answer a question, the cost of the larger knowledge representation is well worth the savings in inferencing time. If, however, at some future point, time is no longer a major factor in natural language generation, many of the ideas put forth here could be used to generate the sub-type information only as it is needed.
use of representation created by enhance:
The following example illustrates how the TEXT system uses the information generated by ENHANCE. The example is taken from actual output generated by the TEXT system in response to the question: What is an AIRCRAFT-CARRIER? It utilizes the portion of the representation generated by ENHANCE. Following the text is a brief description of where each piece of information was found in the representation. (The sentences are numbered here to simplify the discussion: there are no sentence numbers in the actual material produced by TEXT.)

(1) An aircraft carrier is a surface ship with a DISPLACEMENT between 78000 and 80800 and a LENGTH between 1039 and 1063.
(2) Aircraft carriers have a greater LENGTH than all other ships and a greater DISPLACEMENT than most other ships.
(3) Mine warfare ships, for example, have a DISPLACEMENT of 320 and a LENGTH of 144.
(4) All aircraft carriers in the ONR database have REMARKS of 0, FUEL TYPE of BNKR, FLAG of BLBL, BEAM of 252, ENDURANCE RANGE of 4000, ECONOMIC SPEED of 12, ENDURANCE SPEED of 30 and PROPULSION of STMTURGRD.
(5) A ship is classified as an aircraft carrier if the characters 1 through 2 of its HULL NO are CV.

In this example, the DDAs of aircraft carrier are used to identify its features (sentence 1) and to make a comparison between aircraft carriers and all other types of ships (sentences 2 and 3). Since the ENHANCE system ensures that the values of the DDAs for one sub-class appear in the DB attribute list of every other sub-class in the same breakdown, the comparisons between the sub-classes are easily calculated by the TEXT system. Moreover, since ENHANCE has selected out several attributes as more important than others (based on the world knowledge axioms), TEXT can make a meaningful comparison instead of one less relevant. The final sentence is derived from the based DB attribute of aircraft carrier.
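A small Python sketch of how a comparison like sentence 2 could be derived from the value ranges ENHANCE records for sibling sub-classes is shown below. The ranges are invented and the wording thresholds are an assumption, not TEXT's actual criteria.

def compare_attribute(target_range, sibling_ranges):
    """Turn recorded value ranges into a qualitative comparison."""
    low, _high = target_range
    greater = sum(1 for (_slow, shigh) in sibling_ranges if low > shigh)
    if greater == len(sibling_ranges):
        return "greater than all other sub-classes"
    if greater >= len(sibling_ranges) / 2:
        return "greater than most other sub-classes"
    return "not notably greater"

# Hypothetical LENGTH ranges: the aircraft carrier versus its sibling ship sub-classes.
carrier_length = (1039, 1063)
other_lengths = [(144, 144), (438, 510), (510, 673), (490, 560)]
print("LENGTH:", compare_attribute(carrier_length, other_lengths))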
future work:
There are several extensions of the ENHANCE system which would make the knowledge representation more closely reflect the real world. These include (1) the use of very specific axioms in the calculation of descriptive information and (2) the use of relational information as the basis for a breakdown.

At the present time, all descriptive sub-class information is calculated from the actual contents of the database, although sub-class formation may be based on the very specific axioms. The database contents may not adequately capture the real world distinctions between the sub-classes. For this reason, a set of very specific axioms specifying descriptive information could be adopted. The need for such axioms can best be seen in the DDA generated for ship sub-type AIRCRAFT-CARRIER. Since there are no attributes in the database indicating the function of a ship, there is no way of using the fact that the function of an AIRCRAFT-CARRIER is to carry aircraft to distinguish AIRCRAFT-CARRIERS from other ships. This is, however, a very important real world distinction. Very specific axioms could be developed to allow the user to specify these important distinctions not captured by the contents of the database.

The ENHANCE system could also be improved by utilizing the relational information when creating the breakdowns. For example, missiles can be divided into sub-classes on the basis of what kind of vehicles they are carried by. AIR-TO-AIR and AIR-TO-SURFACE missiles are carried on aircraft, while SURFACE-TO-SURFACE missiles are carried on ships. Thus, the relations often contain important sub-class distinctions that could be used by the system.
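As a purely hypothetical sketch of this proposed extension (it is not something ENHANCE currently does), grouping one entity's instances by a relation could look like the following in Python; the CARRIED-BY tuples are invented.

from collections import defaultdict

# Hypothetical CARRIED-BY relation tuples: (missile id, type of carrying vehicle).
carried_by = [("M1", "AIRCRAFT"), ("M2", "AIRCRAFT"), ("M3", "SHIP"), ("M4", "SHIP")]

def breakdown_by_relation(tuples):
    """Group one entity's instances by the entity they are related to."""
    groups = defaultdict(list)
    for instance, related in tuples:
        groups[related].append(instance)
    return dict(groups)

print(breakdown_by_relation(carried_by))   # {'AIRCRAFT': ['M1', 'M2'], 'SHIP': ['M3', 'M4']}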
conclusion:
A system has been described which automatically creates part of a knowledge representation used for natural language generation. This enables the generation system to give a richer description of the database, since the information generated by ENHANCE can be used to make comparisons between sub-classes which would otherwise require extensive inferencing. ENHANCE generates sub-classes of the entity classes in the database; it uses a set of world knowledge axioms to guide the formation of the sub-classes. The axioms ensure the sub-classes are meaningful and that salient information is chosen for the sub-class descriptions. This in turn ensures that the generation system will have salient information available to use, making the generated text more meaningful to the user.

9.0 ACKNOWLEDGEMENTS

I would like to thank Aravind Joshi and Kathleen McKeown for their many helpful comments throughout the course of this work, and Bonnie Webber, Eric Mays, and Sitaram Lanka for their comments on the content and style of this paper.
:
The knowledge representation is an important factor in natural language generation since it limits the semantic capabilities of the generation system. This paper identifies several information types in a knowledge representation that can be used to generate meaningful responses to questions about database structure. Creating such a knowledge representation, however, is a long and tedious process. A system is presented which uses the contents of the database to form part of this knowledge representation automatically. It employs three types of world knowledge axioms to ensure that the representation formed is meaningful and contains salient information.

In order for a user to extract meaningful information from a database system, s/he must first understand the system's view of the world: what information the system contains and what that information represents. An optimal way of acquiring this knowledge is to interact, in natural language, with the system itself, posing questions to it about the structure of its contents. The TEXT system [McKeown 82] was developed to facilitate this type of interaction. In order to make use of the TEXT system, a system's knowledge about itself must be rich enough to support the generation of interesting texts about the structure of its contents. As I will demonstrate, standard database models [Chen 76], [Smith & Smith 77] are not sufficient to support this type of generation. Moreover, since time is such an important factor when generating answers, and extensive inferencing is therefore not practical, the system's self knowledge must be immediately available in its knowledge representation. The ENHANCE system, described here, has been developed to augment a database schema with the kind of information necessary for generating informative answers to users' queries.

The ENHANCE system creates part of the knowledge representation used by TEXT based on the contents of the database. A set of world knowledge axioms are used to ensure that this knowledge representation reflects both the database contents and the database designer's view of the world. One important class of questions involves comparing database entities. The system's knowledge representation must therefore contain meaningful information that can be used to make comparisons (analogies) between various entity classes. This paper focuses specifically on those aspects of the knowledge representation generated by ENHANCE which facilitate the use of analogies. An overview of the knowledge representation used by TEXT is first given. This is followed by a discussion of how part of this representation is automatically created by ENHANCE.

* This work was partially supported by National Science Foundation grant #MCS81-07290.
Appendix:
| null | null | null | null | {
"paperhash": [
"keown|the_text_system_for_natural_language_generation:_an_overview",
"lee|extended_semantics_for_generalization_hierarchies",
"smith|database_abstractions:_aggregation_and_generalization",
"chen|the_entity-relationship_model:_toward_a_unified_view_of_data",
"mckeown|the_text_system_for_natural_language_generation:_an_overview",
"mckeown|generating_natural_language_text_in_response_to_questions_about_database_structure",
"mccoy|the_enhance_system:_creating_meaningful_sub-types_in_a_database_knowledge_representation_for_natural_language_generation",
"chen|the_entity-relationship_model:_towards_a_unified_view_of_data"
],
"title": [
"THE TEXT SYSTEM FOR NATURAL LANGUAGE GENERATION: AN OVERVIEW",
"Extended semantics for generalization hierarchies",
"Database abstractions: aggregation and generalization",
"The entity-relationship model: toward a unified view of data",
"The Text System for Natural Language Generation: an Overview",
"Generating natural language text in response to questions about database structure",
"The ENHANCE System: Creating Meaningful Sub-Types in a Database Knowledge Representation for Natural Language Generation",
"The Entity-Relationship Model: Towards a unified view of Data"
],
"abstract": [
"Computer-based generation of natural language requires consideration of two different types of problems: 1) determining the content and textual shape of what is to be said, and 2) transforming that message into English. A computational solution to the problems of deciding what to say and how to organize it effectively is proposed that relies on an interaction between structural and semantic processes. Schemas, which encode aspects of discourse structure, are used to guide the generation process. A focusing mechanism monitors the use of the schemas, providing constraints on what can be said at any point. These mechanisms have been implemented as part of a generation method within the context of a natural language database system, addressing the specific problem of responding to questions about database structure.",
"This paper examines the notion of a generalization abstraction proposed by Smith and Smith and considers the properties of a collection of generalizations as a unit called a 'generalization hierarchy'. Presented here is a more detailed representation which consists of a hybrid between a (graphical) network and a predicate calculus formalism.",
"Two kinds of abstraction that are fundamentally important in database design and usage are defined. Aggregation is an abstraction which turns a relationship between objects into an aggregate object. Generalization is an abstraction which turns a class of objects into a generic object. It is suggested that all objects (individual, aggregate, generic) should be given uniform treatment in models of the real world. A new data type, called generic, is developed as a primitive for defining such models. Models defined with this primitive are structured as a set of aggregation hierarchies intersecting with a set of generalization hierarchies. Abstract objects occur at the points of intersection. This high level structure provides a discipline for the organization of relational databases. In particular this discipline allows: (i) an important class of views to be integrated and maintained; (ii) stability of data and programs under certain evolutionary changes; (iii) easier understanding of complex models and more natural query formulation; (iv) a more systematic approach to database design; (v) more optimization to be performed at lower implementation levels. The generic type is formalized by a set of invariant properties. These properties should be satisfied by all relations in a database if abstractions are to be preserved. A triggering mechanism for automatically maintaining these invariants during update operations is proposed. A simple mapping of aggregation/generalization hierarchies onto owner-coupled set structures is given.",
"A data model, called the entity-relationship model, which incorporates the semantic information in the real world is proposed. A special diagramatic technique is introduced for exhibiting entities and relationships. An example of data base design and description using the model and the diagramatic technique is given. The implications on data integrity, information retrieval, and data manipulation are discussed.",
"Computer-based generation of natural language requires consideration of two different types of problems: i) determining the content and textual shape of what is to be said, and 2) transforming that message into English. A computational solution to the problems of deciding what to say and how to organize it effectively is proposed that relies on an interaction between structural and semantic processes. Schemas, which encode aspects of discourse structure, are used to guide the generation process. A focusing mechanism monitors the use of the schemas, providing constraints on what can be said at any point. These mechanisms have been implemented as part of a generation method within the context of a natural language database system, addressing the specific problem of responding to questions about",
"There are two major aspects of computer-based text generation: (1) determining the content and textual shape of what is to be said; and (2) transforming that message into natural language. Emphasis in this research has been on a computational solution to the questions of what to say and how to organize it effectively. A generation method was developed and implemented in a system called TEXT that uses principles of discourse structure, discourse coherency, and relevancy criterion. \nThe main features of the generation method developed for the TEXT strategic component include (1) selection of relevant information for the answer, (2) the pairing of rhetorical techniques for communication (such as analogy) with discourse purposes (for example, providing definitions) and (3) a focusing mechanism. Rhetorical techniques, which encode aspects of discourse structure, are used to guide the selection of propositions from a relevant knowledge pool. The focusing mechanism aids in the organization of the message by constraining the selection of information to be talked about next to that which ties in with the previous discourse in an appropriate way. \nThis work on generation has been done within the framework of a natural language interface to a database system. The implemented system generates responses of paragraph length to questions about database structure. Three classes of questions have been considered: questions about information available in the database, requests for definitions, and questions about the differences between database entities. \nThe main theoretical results of this research have been on the effect of discourse structure and focus constraints on the generation process. A computational treatment of rhetorical devices has been developed which is used to guide the generation process. Previous work on focus of attention has been extended for the task of generation to provide constraints on what to say next. The use of these two interacting mechanisms constitutes a departure from earlier generation systems. The approach taken in this research is that the generation process should not simply trace the knowledge representation to produce text. Instead, communicative strategies people are familiar with are used to effectively convey information. This means that the same information may be described in different ways on different occasions.",
"The ENHANCE system: Creating Meaningful Sub-Types in a Database Knowledge Representation For Natural Language Generation Kathleen Filliben McCoy SUPERVISOR: Aravind K. Joshi The knowledge representation is an important factor in natural language generation since it limits the semantic capabilities of the generation system. It is, however, a tedious task to hand code a knowledge representation which reflects both a user's view of a domain and the way that domain is .modelled in the database. A system is presented which uses the contents of the database to form part of a database knowledge representation automatically. It augments a database schema depicting the database structure used for natural language generation. Computational solutions are presented for deriving the information types contained in the schema. Three types of world knowledge axioms are used to ensure that the representation formed is meaningful and contains salient information.",
"An improved method of operation is provided for a catalytic, low pressure process for continuously reforming a hydrocarbon charge stock boiling in the gasoline range in order to produce a high octane effluent stream in which process the hydrocarbon charge stock and hydrogen are continuously contacted in a reforming zone with a reforming catalyst containing a catalytically effective amount of a platinum group metal at reforming conditions including a pressure of 25 to 250 psig. The improved method of operation involves continuously adding a refractory light hydrocarbon to the reforming zone in an amount sufficient to result in a mole ratio of refractory light hydrocarbon to hydrogen entering the reforming zone of about 0.4:1 to about 10:1. Moreover, the refractory light hydrocarbon addition is commenced at start-up of the process and continued throughout the duration of the reforming run. The principal advantage associated with this improved method of operation is increased stability of the reforming catalyst and particularly, increased temperature stability at octane."
],
"authors": [
{
"name": [
"Kathleen Keown"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ronald M. Lee",
"R. Gerritsen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Smith",
"DianeC . P. Smith"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Peter P. Chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"K. McKeown"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"K. McKeown"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kathleen F. McCoy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Peter P. Chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"61208346",
"17314457",
"8665905",
"52801746",
"17676483",
"62743223",
"60368486",
"260927278"
],
"intents": [
[],
[
"background"
],
[],
[],
[],
[
"methodology"
],
[
"methodology"
],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false
]
} | Problem: The paper addresses the issue of knowledge representation in natural language generation systems, specifically focusing on generating meaningful responses to questions about database structure.
Solution: The paper proposes a system, ENHANCE, which automatically creates part of the knowledge representation used by the TEXT system based on the contents of the database. This system employs world knowledge axioms to ensure the representation formed is meaningful and contains salient information for facilitating the use of analogies in comparing database entities. | 512 | 0.039063 | null | null | null | null | null | null | null | null |
6464a0f156fe62b10eb5b2d7d048153e820f6c53 | 18859148 | null | Towards a Theory of Comprehension of Declarative Contexts | An outline of a theory of comprehension of declarative contexts is presented. The main aspect of the theory being developed is based on Kant's distinction between concepts as rules (we have called them conceptual specialists) and concepts as an abstract representation (schemata, frames). Comprehension is viewed as a process dependent on the conceptual specialists (they contain the inferential knowledge), the schemata or frames (they contain the declarative knowledge), and a parser. The function of the parser is to produce a segmentation of the sentences in a case frame structure, thus determining the meaning of prepositions, polysemous verbs, noun group etc. The function of this parser is not to produce an output to be interpreted by semantic routines or an interpreter, but to start the parsing process and proceed until a concept relevant to the theme of the text is recognized. Then the concept takes control of the comprehension process overriding the lower level linguistic process. Hence comprehension is viewed as a process in which high level sources of knowledge (concepts) override lower level linguistic processes. | {
"name": [
"Gomez, Fernando"
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 19 | 9 | null | This paper deals with a theory of computer comprehension of descriptive contexts. By "descriptive contexts" I refer to the language of scientific books, text books, this text, etc.. In the distinction performative vs. declarative, descriptive texts clearly fall in the declarative side.Recent work in natural language has dealt with contexts in which the computer understanding depends on the meaning of the action verbs and the human actions (plans, intentions, goals) indicated by them (Schank and Abelson 1977; Grosz 1977; Wilensky 1978; Bruce and Newman 1978) . Also a considerable amount of work has been done in a plan-based theory of task oriented dialogues (Cohen and Perrault 1979; Perrault and Allen 1980; Hobbs and Evans 1980) . This work has had very little bearing on a theory of ~omputer understanding of descriptive contexts.One of the main tenets of the proposed research is that descriptive (or declarative as we prefer to call them) contexts call for different theoretical ideas compared to those proposed for the understanding of human actions, although~ naturally there are aspects that are common. An important characteristic of these contexts is the predominance of descriptive predicates and verbs (verbs such as "contain," "refer," "consist of," etc.) over action verbs.A direct result of this is that the meaning of the sentence does not depend as much on the main verb of the sentence as on the concepts that make it up. Hence meaning representations centered in the main verb of the sentence are futile for these contexts.We have approached the problem of comprehension in these contexts by considering concepts both as active agents that recognize themselves and as an abstract representation of the properties of an object. This aspect of the theory being developed is based on Kant's distinction between concepts as rules (we have called them conceptual specialists) and concepts as an abstract representation (frames, schemata).Comprehension is viewed as a process dependent.on the conceptual specialists (they contain the inferential knowledge), the schemata (they contain structural knowledge), and a parser. The function of the parser is to produce a segmentation of the sentences in a case frame structure, thus determining the meaning of prepositions, polysemous verbs, noun group, etc.. But the function of this parser is not to produce an output to be interpreted by semantic routines, but to start the parsing process and to proceed until a concept relevant to the theme of the text is recognized.Then the concept (a cluster of production rules) takes control of the comprehension process overriding the lower level linguistic processes.The concept continues supervising and guiding the parsing until the sentence has been understood, that is, the meaning of the sentence has been mapped into the final internal representation.Thus a text is parsed directly into the final knowledge structures. 
Hence comprehension is viewed as a process in which high level sources of knowledge (concepts) override lower level linguistic processes.We have used these ideas to build a system, called LLULL, to unde{stand programming problems taken verbatim from introductory books on programming.In Kant's Critique of Pure Reason one may find two views of a concept.According to one view, a concept is a system of rules governing the application of a predicate to an object.The rule that tells us whether the predicate "large" applies to the concept Canada is a such rule. The system of rules that allows us to recognize any given instance of the concept Canada constitutes our concept of Canada. According to a second view, Kant considers a concept as an abstract representation (vorstellung) of the properties of an object.This second view of a concept is akin to the notion of concept used in such knowledge representation languages as FRL, KLONE and KIIL.Frames have played dual functions.They have been used as a way to organize the inferences, and also as a structural representation of what is remembered of a given situation.This has caused confusion between two different cognitive aspects: memory and comprehension (see Ortony, 1978) . We think that one of the reasons for this confusion is due to the failure in distinguishing between the two types of concepts (concepts as rules and concepts as a structural representation).We have based our analysis on Kant's distinction in order to separate clearly between the organization of the inferences and the memory aspect.For any given text, a thematic frame contains structural knowledge about what is remembered of a theme. One of the slots in this frame contains a list of the relevant concepts for that theme.Each of these concepts in this list is separately organized as a cluster of production rules. They contain the inferential knowledge that allows the system to interpret the information being presently processed, to anticipate incoming information, and to guide and supervise the parser (see below).In some instances, the conceptual specialists access the knowledge stored in the thematic frame to perform some of these actions. | null | In this section we explain some of the components of the parser so that the reader can follow the discussion of the examples in the next section. We refer the reader to Gomez (1981) for a detailed description of these concepts. Noun Group: The function that parses the noun group is called DESCRIPTION.DESCR is a semantic marker used to mark all words that may form part of a noun group. An essential component of DESCRIPTION is a mechanism to identify the concept underlying the complex nominals (cf. Levi, 1978) . See Finin (1980) for a recent work on complex nominals that concentrates on concept modification. This is of most importance because it is characteristic of declarative contexts that the same concept may be referred to by different complex nominals.For instance, it is not rare to find the following complex nominals in the same programming problem all of them referring to the same concept: "the previous balance," "the starting balance," "the old balance" "the balance at the beginning of the period." DESCRIPTION will return with the same token (old-bal) in all of these cases.The reader may have realized that "the balance at the beginning of the period" is not a compound noun. They are related to compound nouns.In fact many compound nouns have been formed by deletion of prepositions. 
We have called them prepositional phrases completing a description, and we have treated them as complex nominals. Prepositions:For each preposition (also for each conjunction) there is a procedure. The function of these prepositional experts (cf. Small, 1980) is =o determine the meaning of the preposition. We refer to them as FOR-SP, ON-SP, AS-SP, etc.. Descri~tiue Verbs: (D-VERBS) are those used to describe. We have categorized them in four classes. There are those that describe the constituents of an object. Among them are: consist of, show, include, be ~iven by, contain, etc.. We refer to them as CONSIST-OF D-VERBS. A second class are those used to indicate that something is representing something.Represent, indicate, mean, describe, etc.. belong to this class. We refer to them as REPRESENT D-VERBS. A third class are those that fall under the notion of appear. To this class belong appear, belong, be $iven on etc.. We refer to them as APPEAR D-VERBS. The fourth class are formed by those that express a spatial relation. Some of these are: follow, precede , be followed by any spatial verb. We refer to them as SPATIAL D-VERBS. Action Verbs: We have used different semantic features, which indicate different levels of abstraction, to tag action verbs. Thus we have used the marker SUPL to mark in the dictionary "supply", "provide", "furnish", but not "offer".From the highest level of abstraction all of them are tagged with the marker ATRANS. The procedures that parse the action verbs and the descriptive verbs are called ACTION-VERB and DESCRIPTIVE-VERB respectively.We will comment briefly on the first six sentences of the example in Fig. 2 . We will name each sentence by quoting its beginning and its end. There is a specialist that has grouped the knowledge about checking-accounts.This specialist, whose name is ACCOUNT-SP, will be invoked when the parser finds a concept that belongs to the slot of relevant concepts in the passive frame.The first sentence is: "A bank would like to produce... checking accounts".The OUTPUT-SP is activated by "like".When 0UTPUT-SP is activated by a verb with the feature of REQUEST, there are only two production rules that follow. One that considers that the next concept is an action verb, and another that looks for the pattern <REPORT + CONSIST D-VERB> (where "REPORT" is a semantic feature for "report," "list," etc.).In this case, the first rule is fired. Then ACTION-VERB is activated with the recommendation of invoking the OUTPUT-SUPERVI-SOR each time that an object is parsed. ACTION-VERB awakens the OUTPUT-SUPERVISOR with (RECORDS ABOUT (TRANSACTION)),Because "record" has the feature IGENERIC the OUTPUT-SUPERVISOR tries to redirect the parser by looking for a CONSIST D-VERB. Because the next concept is not a D-VERB, OUTPUT-SUPERVISOR sets RECOG to NIL and returns control to ACTION-VERB.This parses the adverbial phrase introduced by "during" and the prepositional phrase introduced by "with". ACTION-VERB parses the entire sentence without recognizing any relevant concept, except the identification of the frame that was done while processing "a bank".The second sentence "For each account the bank wants ... balance." is parsed in the following way. Although "account" belongs to slot of relevant concepts for this problem, it is skipped because it is in a prepositional phrase that starts a sentence. The 0UTPUT-SP is activated by a REQUEST type verb, "want". STRUCT looks like: (RECIPIENT (ACCOUNT UQ (EACH)) SUBJECT (BANK)). 
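The verb classification above lends itself to a simple marker lexicon. The following is a minimal sketch, not the original UCI Lisp code: the Python structure, the function name wordsense, and the particular word lists are illustrative assumptions about how descriptive verbs and a few action verbs might be tagged with the semantic features the text describes.

```python
# Illustrative sketch only: the original system was written in Lisp. The
# dictionary below tags verbs with the semantic features described in the text
# (CONSIST-OF, REPRESENT, APPEAR and SPATIAL descriptive verbs, plus action-verb
# markers such as SUPL, ATRANS, RECORD and REQUEST). Word lists are abbreviated
# assumptions, not the system's full lexicon.
VERB_FEATURES = {
    # CONSIST-OF descriptive verbs
    "consist of": {"D-VERB", "CONSIST"},
    "contain":    {"D-VERB", "CONSIST"},
    "include":    {"D-VERB", "CONSIST"},
    "show":       {"D-VERB", "CONSIST"},
    # REPRESENT descriptive verbs
    "represent":  {"D-VERB", "REPRESENT"},
    "indicate":   {"D-VERB", "REPRESENT"},
    "describe":   {"D-VERB", "REPRESENT"},
    # APPEAR descriptive verbs
    "appear":     {"D-VERB", "APPEAR"},
    "belong":     {"D-VERB", "APPEAR"},
    # SPATIAL descriptive verbs
    "follow":     {"D-VERB", "SPATIAL"},
    "precede":    {"D-VERB", "SPATIAL"},
    # action verbs at two levels of abstraction
    "supply":     {"ACTION", "SUPL", "ATRANS"},
    "provide":    {"ACTION", "SUPL", "ATRANS"},
    "record":     {"ACTION", "RECORD"},
    "want":       {"ACTION", "REQUEST"},
}

def wordsense(verb: str) -> set:
    """Return the set of semantic features for a verb (empty set if unknown)."""
    return VERB_FEATURES.get(verb, set())

if __name__ == "__main__":
    print(wordsense("contain"))            # {'D-VERB', 'CONSIST'}
    print("REQUEST" in wordsense("want"))  # True
```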
The production rule whose antecedent is <RECORD + CONSIST D-VERB> is fired. The DESCRIPTIVE-VERB function is asked to parse starting in "showing," and activate the OUTPUT-SUPERVISOR each time an object is parsed.The OUTPUT-SUPERVISOR inserts all objects in the CONSIST-OF slot of output, and returns control to the OUTPUT-SP that inserts the RECIPIENT, "account," in the CONSIST-OF slot of output and returns control.The next sentence is "The accounts and transactions ... as follows:" DECLARATIVE asks DESCRIPTION to parse the subject.Because account belongs to the relevant concepts of the passive frame, the ACCOUNT-SP specialist is invoked.There is nothing in STRUCT.When a topic specialist is invoked and the next word is a boolean conjunction, the specialist asks DESCRIPTION to get the next concept for it. If the concept does not belong to the llst of relevant concepts, the specialist sets RECOG to NIL and returns control.Otherwlse it continues examining the sentence.Because transaction belongs to the slot of relevant concepts of the passive frame, ACCOUNT-SP continues in control. ACCOUNT-SP finds "for" and asks DESCRIPTION to parse the nominal phrase. ACCOUNT-SP ignores anything that has the marker HUMAN or TIME. Finally ACCOUNT-SP finds the verb, an APPEAR D-VERB and invokes the DESCRIPTIVE-VERB routine with the recommendation of invoking the ACCOUNT-SUPERVISOR each time a complement is found.The ACCOUNT-SUPERVISOR is awakened with card. This inserts "card" in the INPUT-TYPE slot of account and transaction and returns control to the DESCRIPTIVE-VERB routine.AS-SP (the routine for "as") is invoked next. This, after finding "follows" followed by ":," indicate to DESCRIPTIVE-VERB that the sentence has been parsed.ACCOUNT-SP returns control to DECLARATIVE and this, after checking that QUIT has the value T, returns control to SENTENCE.The next sentence is: "First will be a sequence of cards ... accounts."The INPUT-SP specialist is invoked.STRUCT looks like: (ADV (FIRST) EXIST ). "Sequence of cards" gives the concept card activating the INPUT-SP specialist. The next concept is a REPRESENT D-VERB.INPUT-SP activates the DESCRIPTIVE-VERB routine and asks it to activate the INPUT-SUPERVISOR each time an object is found. The INPUT-SUPERVISOR checks if the object belongs to the relevant concepts for checking accounts.If not, the ACCOUNT-SUPERVISOR will complain.That will be the case if the sentence is: "First will be a sequence of cards describing the students".Assume that the above sentence says: "First will be a sequence of cards consisting of an account number and the old balance."In that case, the INPUT-SP will activate also the INPUT-SUPERVISOR but because the verbal concept is a CONSIST D-VERB, the INPUT-SUPERVISOR will stack the complements in the slot for INPUT. Thus, what the supervisor specialists do depend on the verbal concept and what is coming after.The next sentence is: "Each account is described by ..., in dollars and cents." 
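The redirection behaviour of the supervisors in this walkthrough can be pictured as a small decision procedure. The sketch below is an illustrative Python rendering rather than the original Lisp; the helper names, the GENERIC word list, and the token-stream format are assumptions. It shows how a supervisor either records an object in its slot or, when the object is generic, re-invokes the descriptive-verb routine on the complements of a following CONSIST D-VERB.

```python
# Illustrative sketch (assumed helper names and data shapes, not the original Lisp).
GENERIC = {"data", "information", "records"}           # objects marked IGENERIC
CONSIST_D_VERBS = {"consisting of", "containing", "including", "showing"}

def supervisor(slot, frame, obj, rest_of_sentence):
    """Store obj under frame[slot], or redirect the parser when obj is generic.

    rest_of_sentence is the list of not-yet-consumed constituents, e.g.
    ["including", "an identification number", "five test scores"].
    """
    if obj not in GENERIC:
        frame.setdefault(slot, []).append(obj)          # an individual concept: keep it
        return rest_of_sentence
    # Generic object ("data", "records", ...): look ahead for a CONSIST D-VERB
    # and ask the descriptive-verb routine to parse its complements instead.
    if rest_of_sentence and rest_of_sentence[0] in CONSIST_D_VERBS:
        for complement in rest_of_sentence[1:]:
            frame.setdefault(slot, []).append(complement)
        return []
    return rest_of_sentence                              # nothing to redirect to

if __name__ == "__main__":
    account = {}
    supervisor("INPUT", account, "data",
               ["including", "an identification number", "five test scores"])
    print(account)  # {'INPUT': ['an identification number', 'five test scores']}
```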
Again, the ACCOUNT-SP is activated.The next concept is a CONSIST D-VERB.ACCOUNT-SP assumes that it is the input for accounts and activates the DESCRIPTIVE-VERB function, and passes to it the recommendation of activating the INPUT-SUPERVISOR each time an object is parsed.The INPUT-SUPERVI-SOR is awakened with (NUMBERS CARDINAL (2)).Because number is not an individual concept (like, say, 0 is) the INPUT-SUPERVISOR reexamines the sentence and finds ":," it then again asks to DESCRIPTIVE-VERB to parse starting at "the account number...".The INPUT-SUPERVISOR stacks the complements in the input slot of the concept that is being described:account.The next sentence is: "The last account is followed by ... to indicate the end of the list." The ACCOUNT-SP is invoked again.The following production rule is fired:If the ordinal "last" is modifying "account" and the next concept is a SPATIAL D-VERB then activate the END-OF-DATA specialist.This assumes control and asks DESCRIPTIVE-VERB to parse starting at "followed by" with the usual recommendation of awakening the END-OF-DATA supervisor when a complement is found, and the recommendation of ignoring a PURPOSE clause if the concept is end-of-list or end-of-account. The END-OF-DATA is awakened with "dummy-account". Because "dtumny-account" is not an individual concept, the END-OF-DATA supervisor reexamines the sentence expecting that the next concept is a CONSIST D-VERB.It finds it, and redirects the parser by asking the DESCRIPTIVE-VERB to parse starting in "consisting of two zero values." The END-OF-DATA is awakened with "(ZERO CARD (2))". Because this time the object is an individual concept, the END-OF-DATA supervisor inserts it into the END-OF-DATA slot of the concept being described: account. | In text understanding, there are two distinct issues. One has to do with the mapping of individual sentences into some internal representation (syntactic markers, some type of case grammar, Wilks' preference semantics, Schank's conceptual dependency etc.).In designing this mapping, several approaches have been taken.In Winograd (1972) and Marcus (1979) , there is an interplay between syntax, and semantic markers (in that order), while in Wilks (1973) and Riesbeck (1975) the parser rely almost exclusively on semantic categories.A separate issue has to do with the meaning of the internal representation in relation to the understanding of the text.For instance, consider the following text (it belongs to the second example):"A bank would like to produce records of the transactions during an accounting period in connection with their checking accounts. For each account the bank wants a list showing the balance at the beginning of t1~e period, the number of deposits and withdrawals, and the final balance." Assume that we parse these sentences into our favorite internal representation.Now what we do with the internal representation?It is still far distant from its textual meaning.In fact, the first sentence is only introducing the topic of the programming problem.The writer could have achieved the same effect by saying: "The following is a checking account problem".The textual meaning of the second sentence is the description of the output for that problem.The writer could have achieved the same effect by saying that the output for the problem consists of the old-balance, deposits, withdrawals, etc.. 
One way to produce the textual meaning of the sentence is to interpret the internal representation that has already been built.Of course, that is equivalent to reparsing the sentence.Another way is to map the sentence directly into the final representation or the textual meaning of the sentence.That is the approach we have taken. DeJong (1979) and Schank etal. (1979) are two recent works that move in that direction. DeJong's system, called FRUMP, is a strong form of top down parser.It skims the text looking for those concepts in which it is interested.When it finds all of them, it ignores the remainder of the text. In analogy to key-word parsers, we may describe FRUMP as a key-concept parser.In Schank etal. (1979) , words are marked in the dictionary as skippable or as having high relevance for a given script.When a relevant word is found, some questions are formulated as requests to the parser.These requests guide the parser in the understanding of the story.In our opinion, the criteria by which words are marked as skippable or relevant are not clear.There are significant differences between our ideas and those in the aforementioned works. The least signi£icant o~ them is that the internal representation selected by us has been a type of case grammar, while in those works the sentences are mapped into Schank's conceptual dependency notation.Due to the declarative nature of the texts we have studied, we have not seen a need for a deeper representation of the action verbs. The most important difference lies in the incorporation in our model of Kant's distinction between concepts as a system of rules and concepts as an abstract representation (an epistemic notion that is absent in Schank and his collobarators' work).The inclusion of this distinction in our model makes the role and the organization of the different components that form part of comprehension differ markedly from those in the aforementioned works.The organization that we have proposed appears in Fig. I . Central to the organization are the conceptual specialists.The other components are subordinated to them.FJ.$ure 1 Sys=em Orsanizai::Lon• "ne parser is essentially based on semantic markers and parses a sentence in to a case frame structure. The specialists contain contextual knowledge relevant to each ~pecific topic. This knowledge is 6f inferential type. What we have termed "passive frames" contain what the system remembers of a given topic. At the beginning of the parsing process, the active frames contain nothing. At the end of the process, the meaning of the text will be recorded in them. Everything in these frames, including the name of the slots, are built from scratch by the conceptual specialists.The communication between these elements is as follows. When a text is input to the system, the parser begins to parse the first sentence.In the parser there are mechanisms to recognize the passive frame associated with the text. Once this is done, mechanisms are set on to check if the most recent parsed conceptual constituent of the sentence is a relevant concept. This is done slmply by checking if the concept belongs to the list of relevant concepts in the passive frame. If that is the case the specialist (concept) override the parser. What does this exactly mean? 
It does not mean that the specialist will help the parser to produce the segmentation of the sentence, in a way similar to Winograd's and Marcus' approaches in which semantic selections help the syntax component of the parser to produce the right segmentation of the sentence. In fact when the specialists take over the segmentation of the sentence stops. That is what "overriding lower linguistic processes" exactly means. The specialist has knowledge to interpret whatever structure the parser has built as well as to make sense directly of the remaining constituents in the rest of the sentence."To interpret" and "make sense directly" means that the constituents of the sentence will be mapped directly into the active frame that the conceptual specialists are building. However this does not mean that the parser will be turned off. The parser continues functioning, not in order to continue with the segmentation of the sentence but to return the remaining of the conceptual constituents of the sentence to the specialist in control when asked by it. Thus what we have called "linguistic knowledge" has been separated from the high level "inferential knowledge" that is dependent on the subject matter of a given topic as well as from the knowledge that is recalled from a given situation.These three different cognitive aspects correspond to what we have called "parser," "conceptual specialists," and "passive frames" respectively.The concepts relevant to a programming topic are grouped in a passive frame. We distinguish between those concepts which are relevant to a specific programming task, like balance to checking-account programs, and those relevant to any kind of program, like output, inRut, end-of-data, etc.. The former can be only recognized when the programming topic has been identified.A concept like output will not only be activated by the word "output" or by a noun group containing that word. The verb "print" will obviously activate that concept. Any verb that has the feature REQUEST, a semantic feature associated with such verbs as "like," "want," "need," etc., will activate also the concept output.Similarly nominal concepts like card and verbal concepts like record, a semantic feature for verbs like "record," "punch," etc. are Just two examples of concepts that will activate the input specialist.The recognition of concepts is as follows: Each time that a new sentence is going to be read, a global variable RECOG is initialized to NIL. Once a nominal or verbal concept in the sentence has been parsed, the function RECOGNIZE-CONCEPT is invoked (if the value of RECOG is NIL). This function checks if the concept that has been parsed is relevant to the progran~ning task in general or (if the topic has been identified) is relevant to the topic of the programming example.If so, RECOGNIZE-CONCEPT sets RECOG to T and passes control to the concept that takes control overriding the parser.Once a concept has been recognized, the specialist for that concept continues in control until the entire sentence has been processed. 
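One way to picture this division of labour, in which the specialist interprets what the parser has already built and then pulls the remaining constituents on demand, is sketched below. The generator-based formulation and all names are illustrative assumptions rather than the original implementation.

```python
# Illustrative sketch: the parser yields conceptual constituents one at a time;
# once a relevant concept has been recognized, the specialist consumes the rest
# of the sentence and writes directly into the active frame it is building.
def parser(constituents):
    for c in constituents:           # segmentation is no longer the goal here;
        yield c                      # the parser just hands constituents over

def output_specialist(stream, active_frame):
    """Map the remaining constituents of the sentence into the OUTPUT frame."""
    for constituent in stream:
        if constituent.get("human") or constituent.get("time"):
            continue                 # cases the specialist is told to ignore
        active_frame.setdefault("OUTPUT", {}).setdefault("CONSIST-OF", []).append(
            constituent["token"])

if __name__ == "__main__":
    sentence = [{"token": "old-bal"}, {"token": "deposits"},
                {"token": "withdrawals"}, {"token": "final-bal"}]
    frame = {}
    output_specialist(parser(sentence), frame)
    print(frame)
    # {'OUTPUT': {'CONSIST-OF': ['old-bal', 'deposits', 'withdrawals', 'final-bal']}}
```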
The relevant concept may be the subject or any other case of the sentence.However if the relevant concept is in a prepositional phrase that starts a sentence, the relevant concept will not take control.The following data structures are used during parsing.A global variable, STRUCT, holds the result of the parsing.STRUCT can be considered as a STM (short term memory) for the low level linguistic processes.A BLACKBOARD (Erman and Lesser, 1975) is used for communication between the high level conceptual specialists and the low level linguistic experts.Because the information in the blackboard does not go beyond the sentential level, it may be considered as STM for the high level sources of knowledge.A global variable WORD holds the word being examined, and WORDSENSE holds the semantic features of that word.An instructor records the name and five test scores on a data card for each student.The registrar also supplies data cards containing a student name, identification number and number of courses passed.The parser is invoked by activating SENTENCE. Because "an" has the marker DESCR, SENTENCE passes control to DECLARATIVE which handles sentences starting with a nominal phrase.(There are other functions that respectively handle sentences starting with a prepositional phrase, an adverbial clause, a co~nand, an -ing form, and sentences introduced by "to be" (there be, will be, etc.) with the meaning of existence.) DECLARATIVE invokes DESCRIPTION.This parses "an instructor" obtaining the concept instructor.Before returning control, DESCRIPTION activates the functions RECOG-NIZE-TOPIC and RECOGNIZE-CONCEPT.The former function checks in the dictionary if there is a frame associated with the concept parsed by DESCRIPTION.The frame EXAM-SCORES is associated with instructor, then the variable TOPIC is instantiated to that frame.The recognition of the frame, which may be a very hard problem, is very simple in the programming problems we have studied and normally the first guess happens to be correct. Next, RECOGNIZE-CONCEPT is invoked. Because instructor does not belong to the relevant concepts of the EXAM-SCORES frame, it returns control. Finally DESCRIPTION returns control to DECLARATIVE, along with a list containing the semantic features of instructor. DECLARATIVE, after checking that the feature TIME does not belong to those features, inserts SUBJECT before "instructor" in STRUCT.Before storing the content of WORD, "records," into STRUCT, DECLARATIVE invokes RECOGNIZE-CONCEPT to recognize the verbal concept.All verbs with the feature record, as we said above, activate the input specialist, called INPUT-SP.When INPUT-SP is activated, STRUCT looks like (SUBJ (INSTUCTOR)). As we said in the introduction, the INPUT specialist is a collection of production rules. One of those rules says:IF the marker RECORD belongs to WORDSENSE then activate the function ACTION-VERB and pass the following recommendations to it: l)activate the INPUT-SUPERVISOR each time you find an object 2) if a RECIPIENT case is found then if it has the feature HVM_AN, parse and ignore it. Otherwise awaken the INPUT-SUPERVISOR 3) if a WHERE case (the object where something is recorded) is found, awaken the INPUT-SUPERVISOR.The INPUT-SUPERVISOR is a function that is controlling the input for each particular problem. 
ACTION-VERB parses the first object and passes it to the INPUT-SUPERVISOR.This checks if the semantic feature IGENERIC (this is a semantic feature associated with words that refer to generic information like "data," "information," etc.) does not belong to the object that has been parsed by ACTION-VERB.If that is not the case, the INPUT-SUPERVISOR, after checking in the PASSIVE-FRAME that name is normally associated with the input for EXAM-SCORES, inserts it in the CONSIST-OF slot of input.The INPUT-SUPERVISOR returns control to ACTION-VERB that parses the next object and the process explained above is repeated.When ACTION-VERB finds the preposition "on," the routine ON-SP is activated.This, after checking that the main verb of the sentence has been parsed and that it takes a WHERE case, checks the BLACKBOARD to find out if there is a recommendation for it. Because that is the case, ON-SP tells DESCRIPTION to parse the nominal phrase "on data cards". This returns with the concept card. ON-SP activates the INPUT-SUPERVISOR with card. This routine, after checking that cards is a type of input that the solver handles, inserts "card" in the INPUT-TYPE slot of input and returns control. What if the sentence had said "... on a notebook"? Because notebook is not a form of input, the INPUT -~ SUPERVISOR would have not inserted "book" into the INPUT-TYPE slot. Another alternative is to let the INPUT-SUPERVISOR insert it in the INPUT-TYPE slot and let the problem solver make sense out of it. There is an interesting tradeoff between understanding and problem solving in these contexts. The robuster the understander Is~ the weaker the solver may bed and vice versa. The prepositional phrase "for each student" is parsed similarly. ACTION-VERB returns control to INPUT-SP that inserts "instructor" in the SOURCE slot of input. Finally, it sets the variable QUIT to T to indicate to DECLARATIVE that the sentence has been parsed and returns control to it. DECLARATIVE after checking that the variable QUIT has the value T, returns control to SENTENCE. This resets the variables RECOG, QUIT and STRUCT to NIL and begins to examine the next sentence.The calling sequence for the second sentence is identical to that for the first sentence except that the recognition of concepts is different. The passive frame for EXAM-SCORES does not contain anything about "registrar" nor about "supplies". DECLARATIVE has called ACTION-VERB to parse the verbal phrase.This has invoked DESCRIPTION to parse the object "data cards".STRUCT looks like: (SUBJ (REGISTRAR) ADV (ALSO) AV (SUPPLIES) OBJ ). ACTION-VERB is waiting for DESCRIPTION to parse "data cards" to fill the slot of OBJ. DESCRIPTION comes with card from "data cards," and invokes RECOGNIZE-CONCEPT.The specialist INPUT-SP is connected with card and it is again activated. This time the production rule that fires says: If what follows in the sentence is <universal quatifier> + <D-VERB> or simply D-VERB then activate the function DESCRIPTIVE-VERB and pass it the recommendation of activating the INPUT-SUPERVISOR each time a complement is found.The pattern <universal quantifier> + <D-VERB> appears in the antecedent of the production rule because we want the system also to understand: "data cards each containing...".The rest of the sentence is parsed in a similar way to the first sentence.The INPUT-SUPERVISOR returns control to INPUT-SP that stacks "registrar" in the source slot of input. 
Finally the concept input for this problem looks:INPUT CONSIST-OF (NAME (SCORES CARD (5))) SOURCE (INSTRUCTOR) (NAME ID-NUMBER P-COURSES) SOURCE (REGISTRAR) INPUT-TYPE (CARDS)If none of the concepts of a sentence are recognized -that is the sentence has been parsed and the variable RECOG is NIL -the system prints the sentence followed by a question mark to indicate that it could not make sense of it. That will happen if we take a sentence from a problem about checking~accounts and insert it in the middle of a problem about exam scores.The INPUT-SP and the INPUT-SUPERVISOR are the same specialists. The former overrides and guides the parser'when a concept is initially recognized, the latter plays the same role after the concept has been recognized. The following example illustrates how the INPUT-SUPERVISOR may furthermore override and guide the parser.The registrar also provides cards. Each card contains data including an identification number ...When processing the subject of the second sentence, INPUT-SP is activated.This tells the function DESCRIPTIVE-VERB to parse starting at "contains ..." and to awaken the INPUT-SUPERVISOR when an object is parsed. The first object is "data" that has the marker IGENERIC that tells the INPUT-SUPER-VISOR that "data" can not be the value for the input. The INPUT-SUPERVISOR will examine the next concept looking for a D-VERB.Because that is the case, it will ask the routine DESCRIPTIVE-VERB to parse starting at "including an identification n~mber..."The example below has been taken verbatim from Conway and GriPs (1975) . Some notes about the output for this problem are in order. i) "SPEC" is a semantic feature that stands for specification. If it follows a concept,-it means that the concept is being further specified or described. The semantic feature "SPEC" is followed by a descriptive verb or adjective, and finally it comes the complement of the specification in parentheses. In the only instance in which the descriptive predicate does not follow the word SPEC is in expressions like "the old balance in dollars and cents". Those expressions have been treated as a special construction. 2) All direct objects connected by the conjunction "or" appear enclosed in parentheses. 3) "REPRESENT" is a semantic marker and stands for a REPRESENT D-VERB. 4) Finally "(ZERO CARD 3 Figure 2 | LLULL was running in the Dec 20/20 under UCI Lisp in the Department of Computer Science of the Ohio State University.It has been able to understand ten programming problems taken verbatim from text books.A representative example can be found in Fig. 2 . After the necessary modifications, the system is presently running in a VAXlI/780 under Franz Lisp. We are now in the planning stage of extensively experimenting with the system. We predict that the organization that we have proposed will make relatively simple to add new problem areas. Assume that we want LLULL to understand programming problems about roman numerals, say. We are going to find uses of verbs, prepositions, etc. that our parser will not be able to handle. We will integrate those uses in the parser.On top of that we will build some conceptual specialists that will have inferential knowledge about roman numerals, and a thematic frame that will hold structural knowledge about roman numerals.We are presently following this scheme in the extension of LLULL.In the next few months we expect to fully evaluate our ideas. | Main paper:
concepts, schemata and inferences:
In Kant's Critique of Pure Reason one may find two views of a concept.According to one view, a concept is a system of rules governing the application of a predicate to an object.The rule that tells us whether the predicate "large" applies to the concept Canada is a such rule. The system of rules that allows us to recognize any given instance of the concept Canada constitutes our concept of Canada. According to a second view, Kant considers a concept as an abstract representation (vorstellung) of the properties of an object.This second view of a concept is akin to the notion of concept used in such knowledge representation languages as FRL, KLONE and KIIL.Frames have played dual functions.They have been used as a way to organize the inferences, and also as a structural representation of what is remembered of a given situation.This has caused confusion between two different cognitive aspects: memory and comprehension (see Ortony, 1978) . We think that one of the reasons for this confusion is due to the failure in distinguishing between the two types of concepts (concepts as rules and concepts as a structural representation).We have based our analysis on Kant's distinction in order to separate clearly between the organization of the inferences and the memory aspect.For any given text, a thematic frame contains structural knowledge about what is remembered of a theme. One of the slots in this frame contains a list of the relevant concepts for that theme.Each of these concepts in this list is separately organized as a cluster of production rules. They contain the inferential knowledge that allows the system to interpret the information being presently processed, to anticipate incoming information, and to guide and supervise the parser (see below).In some instances, the conceptual specialists access the knowledge stored in the thematic frame to perform some of these actions.
linguistic knowledge, text understanding and p arsin$:
In text understanding, there are two distinct issues. One has to do with the mapping of individual sentences into some internal representation (syntactic markers, some type of case grammar, Wilks' preference semantics, Schank's conceptual dependency etc.).In designing this mapping, several approaches have been taken.In Winograd (1972) and Marcus (1979) , there is an interplay between syntax, and semantic markers (in that order), while in Wilks (1973) and Riesbeck (1975) the parser rely almost exclusively on semantic categories.A separate issue has to do with the meaning of the internal representation in relation to the understanding of the text.For instance, consider the following text (it belongs to the second example):"A bank would like to produce records of the transactions during an accounting period in connection with their checking accounts. For each account the bank wants a list showing the balance at the beginning of t1~e period, the number of deposits and withdrawals, and the final balance." Assume that we parse these sentences into our favorite internal representation.Now what we do with the internal representation?It is still far distant from its textual meaning.In fact, the first sentence is only introducing the topic of the programming problem.The writer could have achieved the same effect by saying: "The following is a checking account problem".The textual meaning of the second sentence is the description of the output for that problem.The writer could have achieved the same effect by saying that the output for the problem consists of the old-balance, deposits, withdrawals, etc.. One way to produce the textual meaning of the sentence is to interpret the internal representation that has already been built.Of course, that is equivalent to reparsing the sentence.Another way is to map the sentence directly into the final representation or the textual meaning of the sentence.That is the approach we have taken. DeJong (1979) and Schank etal. (1979) are two recent works that move in that direction. DeJong's system, called FRUMP, is a strong form of top down parser.It skims the text looking for those concepts in which it is interested.When it finds all of them, it ignores the remainder of the text. In analogy to key-word parsers, we may describe FRUMP as a key-concept parser.In Schank etal. (1979) , words are marked in the dictionary as skippable or as having high relevance for a given script.When a relevant word is found, some questions are formulated as requests to the parser.These requests guide the parser in the understanding of the story.In our opinion, the criteria by which words are marked as skippable or relevant are not clear.There are significant differences between our ideas and those in the aforementioned works. The least signi£icant o~ them is that the internal representation selected by us has been a type of case grammar, while in those works the sentences are mapped into Schank's conceptual dependency notation.Due to the declarative nature of the texts we have studied, we have not seen a need for a deeper representation of the action verbs. 
The most important difference lies in the incorporation in our model of Kant's distinction between concepts as a system of rules and concepts as an abstract representation (an epistemic notion that is absent in Schank and his collobarators' work).The inclusion of this distinction in our model makes the role and the organization of the different components that form part of comprehension differ markedly from those in the aforementioned works.
organization and communication between the system components:
The organization that we have proposed appears in Fig. I . Central to the organization are the conceptual specialists.The other components are subordinated to them.FJ.$ure 1 Sys=em Orsanizai::Lon• "ne parser is essentially based on semantic markers and parses a sentence in to a case frame structure. The specialists contain contextual knowledge relevant to each ~pecific topic. This knowledge is 6f inferential type. What we have termed "passive frames" contain what the system remembers of a given topic. At the beginning of the parsing process, the active frames contain nothing. At the end of the process, the meaning of the text will be recorded in them. Everything in these frames, including the name of the slots, are built from scratch by the conceptual specialists.The communication between these elements is as follows. When a text is input to the system, the parser begins to parse the first sentence.In the parser there are mechanisms to recognize the passive frame associated with the text. Once this is done, mechanisms are set on to check if the most recent parsed conceptual constituent of the sentence is a relevant concept. This is done slmply by checking if the concept belongs to the list of relevant concepts in the passive frame. If that is the case the specialist (concept) override the parser. What does this exactly mean? It does not mean that the specialist will help the parser to produce the segmentation of the sentence, in a way similar to Winograd's and Marcus' approaches in which semantic selections help the syntax component of the parser to produce the right segmentation of the sentence. In fact when the specialists take over the segmentation of the sentence stops. That is what "overriding lower linguistic processes" exactly means. The specialist has knowledge to interpret whatever structure the parser has built as well as to make sense directly of the remaining constituents in the rest of the sentence."To interpret" and "make sense directly" means that the constituents of the sentence will be mapped directly into the active frame that the conceptual specialists are building. However this does not mean that the parser will be turned off. The parser continues functioning, not in order to continue with the segmentation of the sentence but to return the remaining of the conceptual constituents of the sentence to the specialist in control when asked by it. Thus what we have called "linguistic knowledge" has been separated from the high level "inferential knowledge" that is dependent on the subject matter of a given topic as well as from the knowledge that is recalled from a given situation.These three different cognitive aspects correspond to what we have called "parser," "conceptual specialists," and "passive frames" respectively.
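Read as pseudocode, the communication scheme amounts to a short loop: parse until a constituent names a concept in the passive frame's relevant-concept list, then let that concept's specialist finish the sentence. The sketch below is an assumed reconstruction of that loop; the original control structure was Lisp and considerably more involved.

```python
# Illustrative reconstruction of the top-level loop described above.
def comprehend_sentence(constituents, passive_frame, specialists, active_frames):
    recog = False                      # corresponds to the global RECOG flag
    struct = []                        # short-term memory for the low-level parse
    for c in constituents:
        struct.append(c)               # low-level segmentation in progress
        if not recog and c in passive_frame["relevant-concepts"]:
            recog = True
            # The concept overrides the parser: its specialist interprets STRUCT
            # and the remaining constituents, writing into the active frames.
            specialists[c](struct, constituents, active_frames)
            break
    if not recog:
        # No relevant concept found: the sentence could not be made sense of.
        print(" ".join(constituents) + " ?")

if __name__ == "__main__":
    def account_sp(struct, sentence, frames):
        frames.setdefault("account", {})["mentioned-in"] = " ".join(sentence)

    passive = {"relevant-concepts": ["account", "transaction"]}
    specialists = {"account": account_sp, "transaction": account_sp}
    active = {}
    comprehend_sentence(["the", "account", "and", "transaction", "data"],
                        passive, specialists, active)
    print(active)  # {'account': {'mentioned-in': 'the account and transaction data'}}
```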
the parser:
In this section we explain some of the components of the parser so that the reader can follow the discussion of the examples in the next section. We refer the reader to Gomez (1981) for a detailed description of these concepts. Noun Group: The function that parses the noun group is called DESCRIPTION.DESCR is a semantic marker used to mark all words that may form part of a noun group. An essential component of DESCRIPTION is a mechanism to identify the concept underlying the complex nominals (cf. Levi, 1978) . See Finin (1980) for a recent work on complex nominals that concentrates on concept modification. This is of most importance because it is characteristic of declarative contexts that the same concept may be referred to by different complex nominals.For instance, it is not rare to find the following complex nominals in the same programming problem all of them referring to the same concept: "the previous balance," "the starting balance," "the old balance" "the balance at the beginning of the period." DESCRIPTION will return with the same token (old-bal) in all of these cases.The reader may have realized that "the balance at the beginning of the period" is not a compound noun. They are related to compound nouns.In fact many compound nouns have been formed by deletion of prepositions. We have called them prepositional phrases completing a description, and we have treated them as complex nominals. Prepositions:For each preposition (also for each conjunction) there is a procedure. The function of these prepositional experts (cf. Small, 1980) is =o determine the meaning of the preposition. We refer to them as FOR-SP, ON-SP, AS-SP, etc.. Descri~tiue Verbs: (D-VERBS) are those used to describe. We have categorized them in four classes. There are those that describe the constituents of an object. Among them are: consist of, show, include, be ~iven by, contain, etc.. We refer to them as CONSIST-OF D-VERBS. A second class are those used to indicate that something is representing something.Represent, indicate, mean, describe, etc.. belong to this class. We refer to them as REPRESENT D-VERBS. A third class are those that fall under the notion of appear. To this class belong appear, belong, be $iven on etc.. We refer to them as APPEAR D-VERBS. The fourth class are formed by those that express a spatial relation. Some of these are: follow, precede , be followed by any spatial verb. We refer to them as SPATIAL D-VERBS. Action Verbs: We have used different semantic features, which indicate different levels of abstraction, to tag action verbs. Thus we have used the marker SUPL to mark in the dictionary "supply", "provide", "furnish", but not "offer".From the highest level of abstraction all of them are tagged with the marker ATRANS. The procedures that parse the action verbs and the descriptive verbs are called ACTION-VERB and DESCRIPTIVE-VERB respectively.
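A useful way to see what DESCRIPTION is required to do with complex nominals is a normalization table: many different noun groups must come back as one canonical token. The fragment below is a hedged illustration; the table entries mirror the examples given in the text, but the lookup mechanism and function name are assumptions.

```python
# Illustrative sketch: normalizing complex nominals to a single concept token,
# as DESCRIPTION is described as doing for phrases like "the previous balance".
CANONICAL = {
    "previous balance": "old-bal",
    "starting balance": "old-bal",
    "old balance": "old-bal",
    "balance at the beginning of the period": "old-bal",
    "account number": "account-number",
    "data card": "card",
}

def description(noun_group: str) -> str:
    """Return the canonical token underlying a noun group (identity if unknown)."""
    cleaned = noun_group.lower().strip()
    if cleaned.startswith("the "):
        cleaned = cleaned[4:]
    return CANONICAL.get(cleaned, cleaned)

if __name__ == "__main__":
    for ng in ("the previous balance", "the starting balance",
               "the balance at the beginning of the period"):
        print(ng, "->", description(ng))   # all map to old-bal
```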
recognition of concepts:
The concepts relevant to a programming topic are grouped in a passive frame. We distinguish between those concepts which are relevant to a specific programming task, like balance to checking-account programs, and those relevant to any kind of program, like output, inRut, end-of-data, etc.. The former can be only recognized when the programming topic has been identified.A concept like output will not only be activated by the word "output" or by a noun group containing that word. The verb "print" will obviously activate that concept. Any verb that has the feature REQUEST, a semantic feature associated with such verbs as "like," "want," "need," etc., will activate also the concept output.Similarly nominal concepts like card and verbal concepts like record, a semantic feature for verbs like "record," "punch," etc. are Just two examples of concepts that will activate the input specialist.The recognition of concepts is as follows: Each time that a new sentence is going to be read, a global variable RECOG is initialized to NIL. Once a nominal or verbal concept in the sentence has been parsed, the function RECOGNIZE-CONCEPT is invoked (if the value of RECOG is NIL). This function checks if the concept that has been parsed is relevant to the progran~ning task in general or (if the topic has been identified) is relevant to the topic of the programming example.If so, RECOGNIZE-CONCEPT sets RECOG to T and passes control to the concept that takes control overriding the parser.Once a concept has been recognized, the specialist for that concept continues in control until the entire sentence has been processed. The relevant concept may be the subject or any other case of the sentence.However if the relevant concept is in a prepositional phrase that starts a sentence, the relevant concept will not take control.The following data structures are used during parsing.A global variable, STRUCT, holds the result of the parsing.STRUCT can be considered as a STM (short term memory) for the low level linguistic processes.A BLACKBOARD (Erman and Lesser, 1975) is used for communication between the high level conceptual specialists and the low level linguistic experts.Because the information in the blackboard does not go beyond the sentential level, it may be considered as STM for the high level sources of knowledge.A global variable WORD holds the word being examined, and WORDSENSE holds the semantic features of that word.
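The recognition machinery described here reduces to two small checks, one for the topic frame and one for relevant concepts, together with the RECOG flag that keeps a sentence from being handed over twice. The following is an assumed sketch of those checks, including the stated exception for a relevant concept inside a sentence-initial prepositional phrase; the dictionaries and data shapes are illustrative, not the system's actual tables.

```python
# Illustrative sketch of topic and concept recognition (names follow the text,
# the data structures are assumptions).
DICTIONARY_TOPICS = {"instructor": "EXAM-SCORES", "bank": "CHECKING-ACCOUNTS"}
PASSIVE_FRAMES = {
    "EXAM-SCORES": {"relevant-concepts": ["card", "record", "score", "name"]},
    "CHECKING-ACCOUNTS": {"relevant-concepts": ["account", "transaction", "balance"]},
}

class State:
    def __init__(self):
        self.topic = None     # TOPIC: the passive frame identified for this problem
        self.recog = False    # RECOG: has a relevant concept taken control yet?

def recognize_topic(state, concept):
    if state.topic is None and concept in DICTIONARY_TOPICS:
        state.topic = DICTIONARY_TOPICS[concept]

def recognize_concept(state, concept, in_initial_pp=False):
    """Return the concept that should take control of the parse, if any."""
    if state.recog or state.topic is None or in_initial_pp:
        return None     # a sentence-initial prepositional phrase does not take control
    if concept in PASSIVE_FRAMES[state.topic]["relevant-concepts"]:
        state.recog = True
        return concept
    return None

if __name__ == "__main__":
    s = State()
    recognize_topic(s, "instructor")
    print(recognize_concept(s, "card"))                     # card -> takes control
    print(recognize_concept(s, "card", in_initial_pp=True)) # None: RECOG already set
```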
example 1:
An instructor records the name and five test scores on a data card for each student. The registrar also supplies data cards containing a student name, identification number and number of courses passed. The parser is invoked by activating SENTENCE. Because "an" has the marker DESCR, SENTENCE passes control to DECLARATIVE, which handles sentences starting with a nominal phrase. (There are other functions that respectively handle sentences starting with a prepositional phrase, an adverbial clause, a command, an -ing form, and sentences introduced by "to be" (there be, will be, etc.) with the meaning of existence.) DECLARATIVE invokes DESCRIPTION. This parses "an instructor", obtaining the concept instructor. Before returning control, DESCRIPTION activates the functions RECOGNIZE-TOPIC and RECOGNIZE-CONCEPT. The former function checks in the dictionary if there is a frame associated with the concept parsed by DESCRIPTION. The frame EXAM-SCORES is associated with instructor, so the variable TOPIC is instantiated to that frame. The recognition of the frame, which may be a very hard problem, is very simple in the programming problems we have studied, and normally the first guess happens to be correct. Next, RECOGNIZE-CONCEPT is invoked. Because instructor does not belong to the relevant concepts of the EXAM-SCORES frame, it returns control. Finally DESCRIPTION returns control to DECLARATIVE, along with a list containing the semantic features of instructor. DECLARATIVE, after checking that the feature TIME does not belong to those features, inserts SUBJECT before "instructor" in STRUCT. Before storing the content of WORD, "records," into STRUCT, DECLARATIVE invokes RECOGNIZE-CONCEPT to recognize the verbal concept. All verbs with the feature record, as we said above, activate the input specialist, called INPUT-SP. When INPUT-SP is activated, STRUCT looks like (SUBJ (INSTRUCTOR)). As we said in the introduction, the INPUT specialist is a collection of production rules. One of those rules says: IF the marker RECORD belongs to WORDSENSE, THEN activate the function ACTION-VERB and pass the following recommendations to it: 1) activate the INPUT-SUPERVISOR each time you find an object; 2) if a RECIPIENT case is found, then if it has the feature HUMAN, parse and ignore it, otherwise awaken the INPUT-SUPERVISOR; 3) if a WHERE case (the object where something is recorded) is found, awaken the INPUT-SUPERVISOR. The INPUT-SUPERVISOR is a function that controls the input for each particular problem. ACTION-VERB parses the first object and passes it to the INPUT-SUPERVISOR. This checks whether the semantic feature IGENERIC (a semantic feature associated with words that refer to generic information like "data," "information," etc.) belongs to the object that has been parsed by ACTION-VERB. If it does not, the INPUT-SUPERVISOR, after checking in the PASSIVE-FRAME that name is normally associated with the input for EXAM-SCORES, inserts it in the CONSIST-OF slot of input. The INPUT-SUPERVISOR returns control to ACTION-VERB, which parses the next object, and the process explained above is repeated. When ACTION-VERB finds the preposition "on," the routine ON-SP is activated. This, after checking that the main verb of the sentence has been parsed and that it takes a WHERE case, checks the BLACKBOARD to find out if there is a recommendation for it. Because that is the case, ON-SP tells DESCRIPTION to parse the nominal phrase "on data cards". This returns with the concept card.
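The production rule that fires here, and the supervisor check it triggers, can be sketched as follows. This is a Python illustration of the rule paraphrased above; the actual system was written in Lisp, and the representation (dictionaries and callables) plus the helper names are assumptions.

```python
# Illustrative sketch of the RECORD-verb rule of INPUT-SP and of the
# INPUT-SUPERVISOR's generic-object / passive-frame check.

IGENERIC = {"data", "information"}            # generic words cannot fill input
EXAM_SCORES_FRAME = {"input-items": {"name", "score", "id-number"}}

def input_supervisor(obj, input_concept, frame=EXAM_SCORES_FRAME):
    """Insert a parsed object into the CONSIST-OF slot of input, unless it is
    generic or unknown to the passive frame for the current topic."""
    if obj in IGENERIC:
        return False                          # e.g. "data": not a value for input
    if obj in frame["input-items"]:
        input_concept.setdefault("CONSIST-OF", []).append(obj)
        return True
    return False                              # let the problem solver complain

def record_rule(wordsense, action_verb, input_concept):
    """IF the marker RECORD belongs to WORDSENSE THEN run ACTION-VERB with the
    recommendation to awaken the INPUT-SUPERVISOR on every object."""
    if "RECORD" not in wordsense:
        return False
    action_verb(on_object=lambda obj: input_supervisor(obj, input_concept))
    return True

input_concept = {}
def toy_action_verb(on_object):
    for obj in ["name", "score"]:             # objects ACTION-VERB would parse
        on_object(obj)

record_rule({"RECORD", "ACTION"}, toy_action_verb, input_concept)
print(input_concept)                          # -> {'CONSIST-OF': ['name', 'score']}
```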
ON-SP activates the INPUT-SUPERVISOR with card. This routine, after checking that cards are a type of input that the solver handles, inserts "card" in the INPUT-TYPE slot of input and returns control. What if the sentence had said "... on a notebook"? Because notebook is not a form of input, the INPUT-SUPERVISOR would not have inserted "notebook" into the INPUT-TYPE slot. Another alternative is to let the INPUT-SUPERVISOR insert it in the INPUT-TYPE slot and let the problem solver make sense out of it. There is an interesting tradeoff between understanding and problem solving in these contexts: the more robust the understander is, the weaker the solver may be, and vice versa. The prepositional phrase "for each student" is parsed similarly. ACTION-VERB returns control to INPUT-SP, which inserts "instructor" in the SOURCE slot of input. Finally, it sets the variable QUIT to T to indicate to DECLARATIVE that the sentence has been parsed and returns control to it. DECLARATIVE, after checking that the variable QUIT has the value T, returns control to SENTENCE. This resets the variables RECOG, QUIT and STRUCT to NIL and begins to examine the next sentence. The calling sequence for the second sentence is identical to that for the first sentence, except that the recognition of concepts is different. The passive frame for EXAM-SCORES does not contain anything about "registrar" nor about "supplies". DECLARATIVE has called ACTION-VERB to parse the verbal phrase. This has invoked DESCRIPTION to parse the object "data cards". STRUCT looks like: (SUBJ (REGISTRAR) ADV (ALSO) AV (SUPPLIES) OBJ). ACTION-VERB is waiting for DESCRIPTION to parse "data cards" to fill the slot of OBJ. DESCRIPTION comes back with card from "data cards," and invokes RECOGNIZE-CONCEPT. The specialist INPUT-SP is connected with card and it is again activated. This time the production rule that fires says: IF what follows in the sentence is <universal quantifier> + <D-VERB>, or simply a D-VERB, THEN activate the function DESCRIPTIVE-VERB and pass it the recommendation of activating the INPUT-SUPERVISOR each time a complement is found. The pattern <universal quantifier> + <D-VERB> appears in the antecedent of the production rule because we want the system also to understand "data cards each containing...". The rest of the sentence is parsed in a similar way to the first sentence. The INPUT-SUPERVISOR returns control to INPUT-SP, which stacks "registrar" in the source slot of input. Finally, the concept input for this problem looks like: INPUT CONSIST-OF (NAME (SCORES CARD (5))) SOURCE (INSTRUCTOR) (NAME ID-NUMBER P-COURSES) SOURCE (REGISTRAR) INPUT-TYPE (CARDS). If none of the concepts of a sentence are recognized (that is, the sentence has been parsed and the variable RECOG is NIL), the system prints the sentence followed by a question mark to indicate that it could not make sense of it. That will happen if we take a sentence from a problem about checking accounts and insert it in the middle of a problem about exam scores. The INPUT-SP and the INPUT-SUPERVISOR are the same specialists: the former overrides and guides the parser when a concept is initially recognized, the latter plays the same role after the concept has been recognized. The following example illustrates how the INPUT-SUPERVISOR may furthermore override and guide the parser. The registrar also provides cards.
Each card contains data including an identification number ... When processing the subject of the second sentence, INPUT-SP is activated. This tells the function DESCRIPTIVE-VERB to parse starting at "contains ..." and to awaken the INPUT-SUPERVISOR when an object is parsed. The first object is "data", which has the marker IGENERIC, which tells the INPUT-SUPERVISOR that "data" cannot be the value for the input. The INPUT-SUPERVISOR will examine the next concept looking for a D-VERB. Because that is the case, it will ask the routine DESCRIPTIVE-VERB to parse starting at "including an identification number..."
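The redirection just described, where a generic object such as "data" is skipped and parsing resumes after a following CONSIST-type D-VERB, can be sketched as follows. The control flow follows the paper's description; the concrete token-list representation is an assumption made only for this illustration.

```python
# Illustrative sketch of a supervisor redirecting the parser past a generic
# object and re-entering DESCRIPTIVE-VERB after a CONSIST-type D-VERB.

IGENERIC = {"data", "information"}
CONSIST_D_VERBS = {"contain", "include", "consist"}

def supervise_objects(tokens, input_slot):
    """Walk the complements produced by DESCRIPTIVE-VERB; skip generic objects
    and restart parsing after a CONSIST-type D-VERB that follows them."""
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok in IGENERIC:
            # "data" cannot fill the input; look for a D-VERB to redirect to.
            if i + 1 < len(tokens) and tokens[i + 1] in CONSIST_D_VERBS:
                i += 2                       # re-enter after "including"
                continue
        elif tok not in CONSIST_D_VERBS:
            input_slot.append(tok)           # an individual concept: keep it
        i += 1
    return input_slot

# "Each card contains data including an identification number ..."
print(supervise_objects(["data", "include", "identification-number"], []))
# -> ['identification-number']
```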
example 2:
We will comment briefly on the first six sentences of the example in Fig. 2. We will name each sentence by quoting its beginning and its end. There is a specialist that has grouped the knowledge about checking accounts. This specialist, whose name is ACCOUNT-SP, will be invoked when the parser finds a concept that belongs to the slot of relevant concepts in the passive frame. The first sentence is: "A bank would like to produce... checking accounts". The OUTPUT-SP is activated by "like". When OUTPUT-SP is activated by a verb with the feature REQUEST, there are only two production rules that follow: one that considers that the next concept is an action verb, and another that looks for the pattern <REPORT + CONSIST D-VERB> (where "REPORT" is a semantic feature for "report," "list," etc.). In this case, the first rule is fired. Then ACTION-VERB is activated with the recommendation of invoking the OUTPUT-SUPERVISOR each time that an object is parsed. ACTION-VERB awakens the OUTPUT-SUPERVISOR with (RECORDS ABOUT (TRANSACTION)). Because "record" has the feature IGENERIC, the OUTPUT-SUPERVISOR tries to redirect the parser by looking for a CONSIST D-VERB. Because the next concept is not a D-VERB, the OUTPUT-SUPERVISOR sets RECOG to NIL and returns control to ACTION-VERB. This parses the adverbial phrase introduced by "during" and the prepositional phrase introduced by "with". ACTION-VERB parses the entire sentence without recognizing any relevant concept, except for the identification of the frame that was done while processing "a bank". The second sentence, "For each account the bank wants ... balance.", is parsed in the following way. Although "account" belongs to the slot of relevant concepts for this problem, it is skipped because it is in a prepositional phrase that starts a sentence. The OUTPUT-SP is activated by a REQUEST type verb, "want". STRUCT looks like: (RECIPIENT (ACCOUNT UQ (EACH)) SUBJECT (BANK)). The production rule whose antecedent is <REPORT + CONSIST D-VERB> is fired. The DESCRIPTIVE-VERB function is asked to parse starting at "showing," and to activate the OUTPUT-SUPERVISOR each time an object is parsed. The OUTPUT-SUPERVISOR inserts all objects in the CONSIST-OF slot of output, and returns control to the OUTPUT-SP, which inserts the RECIPIENT, "account," in the CONSIST-OF slot of output and returns control. The next sentence is "The accounts and transactions ... as follows:". DECLARATIVE asks DESCRIPTION to parse the subject. Because account belongs to the relevant concepts of the passive frame, the ACCOUNT-SP specialist is invoked. There is nothing in STRUCT. When a topic specialist is invoked and the next word is a boolean conjunction, the specialist asks DESCRIPTION to get the next concept for it. If the concept does not belong to the list of relevant concepts, the specialist sets RECOG to NIL and returns control. Otherwise it continues examining the sentence. Because transaction belongs to the slot of relevant concepts of the passive frame, ACCOUNT-SP continues in control. ACCOUNT-SP finds "for" and asks DESCRIPTION to parse the nominal phrase. ACCOUNT-SP ignores anything that has the marker HUMAN or TIME. Finally ACCOUNT-SP finds the verb, an APPEAR D-VERB, and invokes the DESCRIPTIVE-VERB routine with the recommendation of invoking the ACCOUNT-SUPERVISOR each time a complement is found. The ACCOUNT-SUPERVISOR is awakened with card. This inserts "card" in the INPUT-TYPE slot of account and transaction and returns control to the DESCRIPTIVE-VERB routine. AS-SP (the routine for "as") is invoked next.
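Before following AS-SP's handling of the colon, the two OUTPUT-SP rules mentioned above can be sketched as follows. This is an illustrative Python rendering of the rule dispatch paraphrased from the paper; the callable interfaces and the small demonstration are assumptions.

```python
# Illustrative sketch of the two OUTPUT-SP rules that can fire after a verb
# carrying the REQUEST feature ("like", "want", ...).

REPORT_WORDS = {"report", "list"}
CONSIST_D_VERBS = {"show", "include", "contain", "consist"}

def output_sp(next_concepts, action_verb, descriptive_verb, output_concept):
    """Dispatch on what follows a REQUEST verb."""
    head = next_concepts[0] if next_concepts else None
    follows = next_concepts[1] if len(next_concepts) > 1 else None
    add = lambda obj: output_concept.setdefault("CONSIST-OF", []).append(obj)

    # Rule 2: <REPORT + CONSIST D-VERB>, e.g. "wants a report showing ..."
    if head in REPORT_WORDS and follows in CONSIST_D_VERBS:
        descriptive_verb(start_after=follows, on_object=add)
        return "report-rule"

    # Rule 1: the next concept is an action verb, e.g. "would like to produce ..."
    action_verb(on_object=add)
    return "action-verb-rule"

out = {}
output_sp(
    ["report", "show"],
    action_verb=lambda on_object: None,
    descriptive_verb=lambda start_after, on_object: [on_object(x)
                                                     for x in ("number", "balance")],
    output_concept=out)
print(out)   # -> {'CONSIST-OF': ['number', 'balance']}
```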
This, after finding "follows" followed by ":," indicate to DESCRIPTIVE-VERB that the sentence has been parsed.ACCOUNT-SP returns control to DECLARATIVE and this, after checking that QUIT has the value T, returns control to SENTENCE.The next sentence is: "First will be a sequence of cards ... accounts."The INPUT-SP specialist is invoked.STRUCT looks like: (ADV (FIRST) EXIST ). "Sequence of cards" gives the concept card activating the INPUT-SP specialist. The next concept is a REPRESENT D-VERB.INPUT-SP activates the DESCRIPTIVE-VERB routine and asks it to activate the INPUT-SUPERVISOR each time an object is found. The INPUT-SUPERVISOR checks if the object belongs to the relevant concepts for checking accounts.If not, the ACCOUNT-SUPERVISOR will complain.That will be the case if the sentence is: "First will be a sequence of cards describing the students".Assume that the above sentence says: "First will be a sequence of cards consisting of an account number and the old balance."In that case, the INPUT-SP will activate also the INPUT-SUPERVISOR but because the verbal concept is a CONSIST D-VERB, the INPUT-SUPERVISOR will stack the complements in the slot for INPUT. Thus, what the supervisor specialists do depend on the verbal concept and what is coming after.The next sentence is: "Each account is described by ..., in dollars and cents." Again, the ACCOUNT-SP is activated.The next concept is a CONSIST D-VERB.ACCOUNT-SP assumes that it is the input for accounts and activates the DESCRIPTIVE-VERB function, and passes to it the recommendation of activating the INPUT-SUPERVISOR each time an object is parsed.The INPUT-SUPERVI-SOR is awakened with (NUMBERS CARDINAL (2)).Because number is not an individual concept (like, say, 0 is) the INPUT-SUPERVISOR reexamines the sentence and finds ":," it then again asks to DESCRIPTIVE-VERB to parse starting at "the account number...".The INPUT-SUPERVISOR stacks the complements in the input slot of the concept that is being described:account.The next sentence is: "The last account is followed by ... to indicate the end of the list." The ACCOUNT-SP is invoked again.The following production rule is fired:If the ordinal "last" is modifying "account" and the next concept is a SPATIAL D-VERB then activate the END-OF-DATA specialist.This assumes control and asks DESCRIPTIVE-VERB to parse starting at "followed by" with the usual recommendation of awakening the END-OF-DATA supervisor when a complement is found, and the recommendation of ignoring a PURPOSE clause if the concept is end-of-list or end-of-account. The END-OF-DATA is awakened with "dummy-account". Because "dtumny-account" is not an individual concept, the END-OF-DATA supervisor reexamines the sentence expecting that the next concept is a CONSIST D-VERB.It finds it, and redirects the parser by asking the DESCRIPTIVE-VERB to parse starting in "consisting of two zero values." The END-OF-DATA is awakened with "(ZERO CARD (2))". Because this time the object is an individual concept, the END-OF-DATA supervisor inserts it into the END-OF-DATA slot of the concept being described: account.
conclusion:
LLULL was running on a DEC 20/20 under UCI Lisp in the Department of Computer Science of the Ohio State University. It has been able to understand ten programming problems taken verbatim from textbooks. A representative example can be found in Fig. 2. After the necessary modifications, the system is presently running on a VAX 11/780 under Franz Lisp. We are now in the planning stage of extensively experimenting with the system. We predict that the organization that we have proposed will make it relatively simple to add new problem areas. Assume that we want LLULL to understand programming problems about, say, Roman numerals. We are going to find uses of verbs, prepositions, etc. that our parser will not be able to handle, and we will integrate those uses into the parser. On top of that we will build some conceptual specialists that will have inferential knowledge about Roman numerals, and a thematic frame that will hold structural knowledge about Roman numerals. We are presently following this scheme in the extension of LLULL. In the next few months we expect to fully evaluate our ideas.
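The extension recipe just given (a thematic frame for structural knowledge plus conceptual specialists for inferential knowledge) can be made concrete with a small sketch. This is purely illustrative: the registry, its API, and the placeholder Roman-numeral specialist are our own assumptions, not LLULL internals.

```python
# Illustrative sketch only: registering a new problem area in the style
# described above.

FRAMES = {}        # thematic (passive) frames: structural knowledge
SPECIALISTS = {}   # conceptual specialists: inferential knowledge (rules)

def register_domain(name, relevant_concepts, specialists):
    FRAMES[name] = {"relevant": set(relevant_concepts)}
    SPECIALISTS.update(specialists)

def roman_numeral_sp(sentence, state):
    """Placeholder specialist: real rules would interpret statements such as
    'the program reads a Roman numeral and prints its decimal value'."""
    state.setdefault("topic", "ROMAN-NUMERALS")

register_domain(
    "ROMAN-NUMERALS",
    relevant_concepts=["numeral", "digit", "decimal-value"],
    specialists={"numeral": roman_numeral_sp},
)
print(FRAMES["ROMAN-NUMERALS"]["relevant"])
```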
i. introduction:
This paper deals with a theory of computer comprehension of descriptive contexts. By "descriptive contexts" I refer to the language of scientific books, textbooks, this text, etc. In the distinction performative vs. declarative, descriptive texts clearly fall on the declarative side. Recent work in natural language has dealt with contexts in which the computer understanding depends on the meaning of the action verbs and the human actions (plans, intentions, goals) indicated by them (Schank and Abelson 1977; Grosz 1977; Wilensky 1978; Bruce and Newman 1978). Also, a considerable amount of work has been done on a plan-based theory of task-oriented dialogues (Cohen and Perrault 1979; Perrault and Allen 1980; Hobbs and Evans 1980). This work has had very little bearing on a theory of computer understanding of descriptive contexts. One of the main tenets of the proposed research is that descriptive (or declarative, as we prefer to call them) contexts call for different theoretical ideas compared to those proposed for the understanding of human actions, although naturally there are aspects that are common. An important characteristic of these contexts is the predominance of descriptive predicates and verbs (verbs such as "contain," "refer," "consist of," etc.) over action verbs. A direct result of this is that the meaning of the sentence does not depend as much on the main verb of the sentence as on the concepts that make it up. Hence meaning representations centered on the main verb of the sentence are futile for these contexts. We have approached the problem of comprehension in these contexts by considering concepts both as active agents that recognize themselves and as an abstract representation of the properties of an object. This aspect of the theory being developed is based on Kant's distinction between concepts as rules (we have called them conceptual specialists) and concepts as an abstract representation (frames, schemata). Comprehension is viewed as a process dependent on the conceptual specialists (they contain the inferential knowledge), the schemata (they contain structural knowledge), and a parser. The function of the parser is to produce a segmentation of the sentences in a case frame structure, thus determining the meaning of prepositions, polysemous verbs, noun groups, etc. But the function of this parser is not to produce an output to be interpreted by semantic routines, but to start the parsing process and to proceed until a concept relevant to the theme of the text is recognized. Then the concept (a cluster of production rules) takes control of the comprehension process, overriding the lower level linguistic processes. The concept continues supervising and guiding the parsing until the sentence has been understood, that is, until the meaning of the sentence has been mapped into the final internal representation. Thus a text is parsed directly into the final knowledge structures. Hence comprehension is viewed as a process in which high level sources of knowledge (concepts) override lower level linguistic processes. We have used these ideas to build a system, called LLULL, to understand programming problems taken verbatim from introductory books on programming.
Appendix: The example below has been taken verbatim from Conway and Gries (1975). Some notes about the output for this problem are in order. 1) "SPEC" is a semantic feature that stands for specification. If it follows a concept, it means that the concept is being further specified or described. The semantic feature "SPEC" is followed by a descriptive verb or adjective, and finally comes the complement of the specification in parentheses. The only instance in which the descriptive predicate does not follow the word SPEC is in expressions like "the old balance in dollars and cents"; those expressions have been treated as a special construction. 2) All direct objects connected by the conjunction "or" appear enclosed in parentheses. 3) "REPRESENT" is a semantic marker and stands for a REPRESENT D-VERB. 4) Finally "(ZERO CARD 3 Figure 2
| null | null | null | null | {
"paperhash": [
"perrault|a_plan-based_analysis_of_indirect_speech_act",
"hobbs|conversation_as_planned_behavior",
"schank|parsing_directly_into_knowledge_structures",
"dejong|prediction_and_substantiation:_a_new_approach_to_natural_language_processing",
"cohen|elements_of_a_plan-based_theory_of_speech_acts",
"marcus|a_theory_of_syntactic_recognition_for_natural_language",
"bruce|interacting_plans",
"altman|a_conceptual_analysis",
"erman|a_multi-level_organization_for_problem_solving_using_many,_diverse,_cooperating_sources_of_knowledge",
"finin|the_semantic_interpretation_of_compound_nominals",
"wilensky|understanding_goal-based_stories",
"grosz|the_representation_and_use_of_focus_in_dialogue_understanding."
],
"title": [
"A Plan-Based Analysis of Indirect Speech Act",
"Conversation as Planned Behavior",
"Parsing Directly into Knowledge Structures",
"Prediction and Substantiation: A New Approach to Natural Language Processing",
"Elements of a Plan-Based Theory of Speech Acts",
"A theory of syntactic recognition for natural language",
"Interacting plans",
"A Conceptual Analysis",
"A Multi-Level Organization For Problem Solving Using Many, Diverse, Cooperating Sources Of Knowledge",
"The semantic interpretation of compound nominals",
"Understanding Goal-Based Stories",
"The representation and use of focus in dialogue understanding."
],
"abstract": [
"We propose an account of indirect forms of speech acts to request and inform based on the hypothesis that language users can recognize actions being performed by others, infer goals being sought, and cooperate in their achievement. This cooperative behaviour is independently motivated and may or may not be intended by speakers. If the hearer believes it is intended, he or she can recognize the speech act as indirect; otherwise it is interpreted directly. Heuristics are suggested to decide among the interpretations.",
"In this paper, planning models developed in artificial intelligence are applied to the kind of planning that must be carried out by participants in a conversation. A planning mechanism is defined, and a short fragment of a free-flowing videotaped conversation is described. The bulk of the paper is then devoted to an attempt to understand the conversation in terms of the planning mechanism. This microanalysis suggests ways in which the planning mechanism must be augmented, and reveals several important conversational phenomena that deserve further investigation.",
"A new type of natural language parser is presented. The idea behind this parser is to map input sentences into the deepest form of the representation of their meaning and make appropriate inferences during the parsing process, using interest to guide the processing.",
"This paper describes a new approach to natural language processing which results in a very robust and efficient system. The approach taken is to integrate the parser with the rest of the system. This enables the parser to benefit from predictions that the rest of the system makes in the course of its processing. These predictions can be invaluable as guides to the parser in such difficult problem areas as resolving referents and selecting meanings of ambiguous words. A program, called FRUMP for Fast Reading Understanding and Memory Program, employs this approach to parsing. FRUMP skims articles rather than reading them for detail. The program works on the relatively unconstrained domain of news articles. It routinely understands stories it has never before seen. The program's success is largely due to its radically different approach to parsing.",
"This paper explores the truism that people think about what they say. It proposes hat, to satisfy their own goals, people often plan their speech acts to affect their listeners' beliefs, goals, and emotional states. Such language use can be modelled by viewing speech acts as operators in a planning system, thus allowing both physical and speech acts to be integrated into plans. \n \nMethodological issues of how speech acts should be defined in a plan-based theory are illustrated by defining operators for requesting and informing. Plans containing those operators are presented and comparisons are drawn with Searle's formulation. The operators are shown to be inadequate since they cannot be composed to form questions (requests to inform) and multiparty requests (requests to request). By refining the operator definitions and by identifying some of the side effects of requesting, compositional adequacy is achieved. The solution leads to a metatheoretical principle for modelling speech acts as planning operators.",
"Abstract : Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this 'determinism' hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter. (Author)",
"The paper presents a notation system for the representation of Interacting plans and applies it in the analysis of a small portion of \"Hansel and Gretel\". The essential problem for the notation system can be stated as follows: How do we represent the plans that determine behavior in a way that explicates Interactions among plans? As the examples Illustrate, the problem is not just to show how cooperation takes place, how conflicts arise and are resolved, how beliefs about plans determine actions, and how differing beliefs and intentions make a story. The system incorporates ideas from work on simple, or non-interacting plans, but the focus is on plans in a social context.",
"This paper presents a ,theoretical analysis of the concept of privacy which emphasizes its role as an interpersonal boundary control process. The paper also analyzes mechanisms and dynamics of privacy, including verbal and paraverbal behavior, personal space, territorial behavior, and culturally based responses. Finally, several functions of privacy are proposed, including regulation of interpersonal interaction, self-other definitional processes, and self-identity. The concept of privacy appears in the literature of several disciplines-psychology, sociology, anthropology, political science, law, architecture, and the design professions. One group of definitions of the term emphasizes seclusion, withdrawal, and avoidance of interaction with others. For example:",
"An organization is presented for implementing solutions to knowledge-based AI problems. The hypothesize-and-test paradigm is used as the basis for cooperation among many diverse and independent knowledge sources (KS's). The KS's are assumed individually to be errorful and incomplete. \n \nA uniform and integrated multi-level structure, the blackboard, holds the current state of the system. Knowledge sources cooperate by creating, accessing, and modifying elements in the blackboard. The activation of a KS is data-driven, based on the occurrence of patterns in the blackboard which match templates specified by the knowledge source. \n \nEach level in the blackboard specifies a different representation of the problem space; the sequence of levels forms a loose hierarchy in which the elements at each level can approximately be described as abstractions of elements at the next lower level. This decomposition can be thought of as an a prion framework of a plan for solving the problem; each level is a generic stage in the plan. \n \nThe elements at each level in the blackboard are hypotheses about some aspect of that level. The internal structure of an hypothesis consists of a fixed set of attributes; this set is the same for hypotheses at all levels of representation in the blackboard. These attributes are selected to serve as mechanisms for implementing the data-directed hypothesize-and-test paradigm and for efficient goal-directed scheduling of KS's. Knowledge sources may create networks of structural relationships among hypotheses. These relationships, which are explicit in the blackboard, serve to represent inferences and deductions made by the KS's about the hypotheses; they also allow competing and overlapping partial solutions to be handled in an integrated manner. \n \nThe Hearsay II speech-understanding system is an implementation of this organization; it is used here as an example for descriptive purposes.",
"This thesis is an investigation of how a computer can be programmed to understand the class of linguistic phenomena loosely referred to as nominal compounds, i.e. sequences of two or more nouns related through modification. Examples of the kinds of nominal compounds dealt with are: 'engine repairs', 'aircraft flight arrival', 'aluminum water pump', and 'noun noun modification'. The interpretation of nominal compounds is divided into three intertwined subproblems: lexical interpretation (mapping words into concepts), modifier parsing (discovering the structure of strings with more than two nominals) and concept modification (assigning an interpretation to the modification or one concept by another). This last problem is the focus of this research. The essential feature of this form of modification is that the underlying semantic relationship which exists between the two concepts is not explicit. Moreover, a large number of relationships might, in principal, exist between the two concepts. The selection of the most appropriate one depends on a host of semantic, pragmatic and contextual factors. As a part of this research, a computer program has been written which builds an appropriate semantic interpretation when given a string of nouns. This program has been designed as one component of the natural language question answering system JETS. The interpretation is done by a set of semantic interpretation rules. Some of the rules are very specific, capturing the meaning of idioms and canned-phrases. Other rules are very general, representing fundamental case-like relationships which can hold between concepts. A strong attempt has been made to handle as much as possible with the more general, highly productive rules. The approach has been built around a frame-based representational system which represents concepts and the relationships between them. The concepts are organized into an abstraction hierarchy which supports inheritance of attributes. The same representational system is used to encode the semantic interpretation rules. An important part of the system is the concept matcher which, given two concepts, determines whether the first describes the second and, if it does, how well.",
"Abstract : Reading requires reasoning. A reader often needs to infer connections between the sentences of a text and must therefore be capable of reasoning about the situations to which the text refers. People can reason about situations because they posses a vast store of knowledge which they can use to infer implicit parts of a situation from those aspects of the situation explicitly described by a text. PAM (Plan Applier Mechanism) is a computer program that understands stories by reasoning about the situations they reference. PAM reads stories in English and produces representations for the stories that include the inferences needed to connect each story's events. To demonstrate that it has understood a story, PAM answers questions about the story and expresses the story from several points of view. PAM reasons about the motives of a story's characters. Many inferences needed for story understanding are concerned with finding explanations for events in the story. PAM has a great deal of knowledge about people's goals which it applies to find explanations for the actions taken by a story's characters in terms of that character's goals and plans.",
"Abstract : This report develops a representation of focus of attention thatcircumscribes discourse contexts within a general representation ofknowledge. Focus of attention is essential to any comprehension processbecause what and how a person understands is strongly influenced bywhere his attention is directed at a given moment. To formalize thenotion of focus, the need for and the use of focus mechanisms areconsidered from the standpoint of building a computer system that canparticipate in a natural language dialogue with a ser, Two ranges offocus, global and immediate, are investigated, and representations forincorporating them in a computer system are developed.The global focus in which an utterance is interpreted is determinedby the total discourse and situational setting of the utterance. Itinfluences what is talked about, how different concepts are introduced,and how concepts are referenced. To encode global focuscomputationally, a representation is developed that highlights thoseitems that are relevant at a given place in a dialogue. The underlyingknowledge representation is segmented into subunits, called focusspaces, that contain those items that are in the focus of attention of adialogue participant during a particular part of the dialogue.Mechanisms are required for updating the focus representation,because, as a dialogue progresses, the objects and actions that arerelevant to the conversation, and therefore in the participants' focusof attention, change. Procedures are described for deciding when andhow to shift focus in task-oriented dialogues, i.e., in dialogues inwhich the participants are cooperating in a shared task. Theseprocedures are guided by a representation of the task being performed.The ability to represent focus of attention in a languageunderstanding system results in a new approach to an important problemin discourse comprehension -- the identification of the referents ofdefinite noun phrases."
],
"authors": [
{
"name": [
"C. Raymond Perrault",
"James F. Allen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jerry R. Hobbs"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Schank",
"Michael Lebowitz",
"L. Birnbaum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. DeJong"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Philip R. Cohen",
"C. Raymond Perrault"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Mitchell P. Marcus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Bertram C. Bruce",
"Denis Newman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"I. Altman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"L. Erman",
"V. Lesser"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Timothy W. Finin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Wilensky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Grosz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"3069430",
"145566374",
"6693167",
"28841837",
"2166355",
"6616065",
"8569060",
"145075954",
"8524471",
"86774644",
"9899836",
"61114426"
],
"intents": [
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[],
[],
[
"background"
],
[
"background"
]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | - Problem: The paper aims to develop a theory of comprehension of declarative contexts, specifically focusing on the language used in scientific and text books.
- Solution: The hypothesis proposes that comprehension in declarative contexts relies on conceptual specialists, schemata or frames, and a parser, with high-level sources of knowledge (concepts) overriding lower-level linguistic processes in the comprehension process. | 512 | 0.017578 | null | null | null | null | null | null | null | null |
5740407655bd901f553b556a357a5f345d3bce78 | 11326430 | null | Scruffy Text Understanding: Design and Implementation of {`}Tolerant{'} Understanders | Most large text-understanding systems have been designed under the assumption that the input text will be in reasonably "neat" form, e.g., newspaper stories and other edited texts. However, a great deal of natural language text, e.g., memos, rough drafts, conversation transcripts, etc., has features that differ significantly from "neat" texts, posing special problems for readers, such as misspelled words, missing words, poor syntactic construction, missing periods, etc. Our solution to these problems is to make use of expectations, based both on knowledge of surface English and on world knowledge of the situation being described. These syntactic and semantic expectations can be used to figure out unknown words from context, constrain the possible word-senses of words with multiple meanings (ambiguity), fill in missing words (ellipsis), and resolve referents (anaphora). This method of using expectations to aid the understanding of "scruffy" texts has been incorporated into a working computer program called NOMAD, which understands scruffy texts in the domain of Navy messages. | {
"name": [
"Granger, Richard H."
],
"affiliation": [
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 9 | 9 | null | Consider the following (scribbled) message, left by a computer science professor on a colleague's desk:[i] Met w/chrmn agreed on changes to prposl nxt mtg 3 Feb.A good deal of informal text such as everyday messages like the one above are very ill-formed grammatically and contain misspellings, ad hoc abbreviations and lack of important punctuation such as periods between sentences. Yet people seem to easily understand such messages, and in fact most people would probably understand the above message just as readily as they would a more '~ell-formed" version:"I met with the chairman, and we agreed on what changes had to be made to the proposal. Our next meeting will be on Feb. 3."This research was supported in part by the Naval Ocean Systems Center under contract N-00123-81-C-I078.No extra information seems to be conveyed by this longer version, and message-writers appear to take advantage of this fact by writing extremely terse messages such as [I] , apparently counting on readers" ability to analyze them in spite of their messiness.If "scruffy" messages such as this one were only intended for a readership of one, there wouldn't be a real problem. However, this informal type of "memo" message is commonly used for information transfer in many businesses, universities, government offices, etc. An extreme case of such an organization is the Navy, which every hour receives many thousands of short messages, each of which must be encoded into computer-readable form for entry into a database. Currently, these messages come in in very scruffy form, and a growing number of man-hours is spent on the encoding-byhand process. Hence there is an obvious benefit to partially automating this encoding process. The problem is that most existing text-understanding systems (e.g.ELI [Riesbeck and Schank 76] , SAM [Cullingford 77 ], FRUMP [DeJong 79], IPP [Lebowitz 80] ) would fai£ to successfully analyze ill-formed texts like [i], because they have been designed under the assumption that they will receive "heater" input, e.g., edited input such as is found in newspapers or books. This paper briefly outlines some of the properties of texts like [i] , that allow readers to unaerstand it in spite of its scruffiness; and some of the knowledge and mechanisms that seem to underlle readers" ability to understand such texts. A text-processing system called NOMAD is discussed which makes use of the theories described here to process scruffy text in the domain of everyday Navy messages. NOMAD's operation is based on the use of expectations during understanding, based both on knowledge of surface English and on world knowledge of the s~tuation being described. These syntactic and semantic expectations can be used to aid naturally in the solution of a wide range of problems that arise in understanding both "scruffy" texts and pre-edited texts, such as figuring out unknown words from context, constraining the possible word-senses of words with multiple meanings (ambiguity), filling in missing words (ellipsis), and resolving unknown referents (anaphora). [Granger 1977 ] was the first program that could figure out meanings of unknown words encountered during text understanding. FOUL-UP was an attempt to model the corresponding human ability commonly known as "f~guring out a word from context". 
FOUL-UP worked with the SAM system [Cullingford 1977 ], using the expectations generated by scripts [Schank and Abelson 19771 to restrict the possible meanings of a word, based on what object or action would have occurred in that position according to the script for the story.For instance, consider the following excerpt from a newspaper report of a car accident:[2] Friday, a car swerved off Route 69.The vehicle struck an embankment.The word "embankment" was unknown to the SAM system, but it had encoded predictions about certain attributes of the expected conceptual object of the PROPEL action (the object that the vehicle struck); namely, that it would be a physical object, and would function as an "obstruction" in the vehicle-accident script. (In addition, the conceptual analyzer (ELI - [Riesbeck and Schank 1976] ) had the expectation that the word in that sentence position would be a noun.)Hence, when the unknown word was encountered, FOUL-UP would make use of those expected attributes to construct a memory entry for the word "embankment", indicating that it was a noun, a physical object, and an "obstruction" in vehicleaccident situatlons.It would then create a dictionary definition that the system would use from then on whenever the word was encountered in this context.NOMAD incorporates ideas from, and builds on, earlier work on conceptual analysis (e.g., [Riesbeck and Schank 1976] , [Birnbaum and Selfridge 1979] ); situation and intention inference (e.g., [Cullingford 1977|, [Wilensky 1978 ; and English generatlon (e.g. [Goldman 1973 ], [McGuire 1980 ). What differentiates NOMAD significantly from its predecessors are its error recognition and error correction abilities, which enable it to read texts more complex than those that can be handled by other text understanding systems.We have so far identified the following five types of problems that occur often in scruffy unedited texts. Each problem is illustrated by an example from the domain of Navy messages. The next section will then describe how NOMAD deals with each type of error. Kashin.--"returned" in the sense of re-tal~ation after a previous attack, or "returned" in the sense of "peaceably delivered to"?) When these problems arise in a message, NOMAD must first recognize what the problem is (which is often difficult to do), and then attempt t~ ~orrect the error.These two processes are briefly described in the fnllowing sections.For each of the above examples of problems encountered, NOMAD's method of recognizing and correcting the problem are described here, along with actual English input and output from NOMAD.ENEMY SCUDDED BOMBS AT US.Problem: Unknown word. The unknown word "scudded" is trivial to recognize, since it is the only word without a dictionary entry. Once it has been recognized, NOMAD checks it to see if it could be (a) a misspelllng, (b) an abbreviation or (c) a regular verb-tense of some known word. Solution: Use expectations to figure out word meaning from context. When the spelling checkers fail, a FOUL-UP mechanisms is called which uses knowledge of what actions can be done by an enemy actor, to a weapon object, directed at us. It infers that the action is probably a propel. Again, this is only an educated guess by the system, and may have to be corrected later on the basis of future information.An enemy ship fired bombs at our ship.Problem: Missing sentence boundaries. NOMAD has no expectations for a new verb ("opened") to appear immediately after the completed clause "locked on". 
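The slot-filling step described above, in which the attributes predicted by the active script become the dictionary entry for an unknown word, can be sketched as follows. This is purely illustrative (FOUL-UP and SAM were Lisp programs); the slot names, feature sets, and function signature are assumptions.

```python
# Illustrative sketch of FOUL-UP-style word learning from script expectations.

VEHICLE_ACCIDENT_SCRIPT = {
    "PROPEL": {"OBJECT-OF": {"part_of_speech": "noun",
                             "features": {"physical-object", "obstruction"}}},
}

DICTIONARY = {"car": {"part_of_speech": "noun", "features": {"vehicle"}}}

def foul_up(word, script, action, role):
    """Define an unknown word from the expectations attached to a script slot.
    The result is an educated guess that later information may revise."""
    if word not in DICTIONARY:
        DICTIONARY[word] = dict(script[action][role])
    return DICTIONARY[word]

print(foul_up("embankment", VEHICLE_ACCIDENT_SCRIPT, "PROPEL", "OBJECT-OF"))
```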
It tries but fails to connect "opened" to the phrase "locked on". Solution: Assume the syntactic expectations failed because a clause boundary was not adequately marked in the message; assume such a boundary is there. NOMAD assumes that there may have been an intended sentence separation before "opened", since no expectations can account for the word in this sen-tence position. Hence, NOMAD saves "locked on" as one sentence, and continues to process the rest of the text as a new sentence.We aimed at an unknown object. object.We fired at the | null | null | But even if the SAM system had known the word "embankment", it would not have been able to handle a less edited version of the story, such as this:[3] Vehcle act Rt69; car strck embankment; drivr dead one psngr in,; ser dmg to car full rpt frtncmng.While human readers would have little difficulty understanding this text, no existing computer programs could do so.The scope of this problem is wide; examples of texts that present "scruffy" difficulties to readers are completely unedited texts, such as messages composed in a hurry, with little or no re-writlng, rough drafts, memos, transcripts of conversatzons, etc. Such texts may contain missing words, ad hoc abbreviations of words, poor syntax, confusing order of presentation of ideas, mis-spellzngs, lack of punctuation, etc.Even edited texts such as newspaper stories often contain misspellzngs, words unknown to the reader, and ambiguities;and even apparently very simple texts may contain alternative possible interpretations, which can cause a reader to construct erroneous initial inferences that must later be corrected (see [Granger 1980 [Granger ,1981 ).The following sections describe the NOMAD system, which incorporates FOUL-UP's abilities as well as significantly extended abilities to use syntactic and semantic expectations to resolve these difficulties, in the context of Naval messages.MIDWAY SIGHTED ENEMY. FIRED.Problem: Missing subject and objects. "Fired" builds a PROPEL, and expects a subject and objects to play the conceptual roles of ACTOR (who did the PROPELing), OBJECT (what got PROPELed) and RECIPI-ENT (who got PROPELed at).However, no surface subjects or objects are presented here. Solution: Use expectations to fill in conceptual cases.NOMAD uses situational expectations from the known typical sequence of events in an "ATTACK" (which consists of a movement (PTRANS), a sighting (ATTEND) and firing (PROPEL)). Those expectations say (among other things) that the actor and recipient of the PROPEL will be the same as the actor and direction of the ATTEND, and that the OBJECT that got PROPELed will be some kind of projectile, which is not further specified here.We sighted an enemy ship. We fired at the ship.LOST CONTACT ON ENEMY SHIP.Problem: Missing event in event sequence. NOMAD"s knowledge of the "Tracking" situation cannot understand a ship losing contact until some contact has been gained. Solution: Use situational expectations to infer missing events. NOMAD assumes that the message implies the previous event of gaining contact with the enemy ship, based on the known sequence of events in the "Tracking" situation.We sighted an enemy ship. Then we lost radar visual contact with the ship. Prob!em: Ambiguous interpretation of action. NOMAD cannot tell whether the action here is "returning" fire to the enemy, i.e., firing back at them (after they presumably had fired at us), or peaceably delivering bombs, with no firing implied. 
Solution: Use expectations of probable goals of actors.NOMAD first interprets the sentence as "peaceably delivering" some bombs to the ship. However, NOMAD contains the knowledge that enemies do not give weapons, information, personnel, etc., to each other.Hence it attempts to find an alternative interpretation of the sentence, in this case finding the "returned fire" interpretation, which does not violate any of NOMAD's knowledge about goals.It then infers, as in the above example, that the enemy ship must have previously fired on us.An unknown enemy ship fired on us. bombs at them.Then we firedThe ability to understand text is dependent on the ability to understand what is being described in the text. Hence, a reader of, say, English text must have applicable knowledge of both the situations that may be described in texts (e.g., actions, states, sequences of events, goals, methods of achieving goals, etc.) and the the surface structures that appear in the language, i.e., the relatlons between the surface order of appearance of words and phrases, and their correspondin~ meaning structures.The process of text understanding is the combined applicatlon of these knowledge sources as a reader proceeds through a text. This fact becomes clearest when we investigate the understanding of texts that present particular problems to a reader. Human understanding is inherently tolerant; people are naturally able to ignore many types of errors, omissions, poor constructions, etc., and get straight to the meaning of the text.Our theories have tried to take this ability into account by including knowledge and mechanisms of error noticing and correcting as implicit parts of our process models of language understanding. The NOMAD system is the latest in a line of "tolerant" language understanders, beginning with FOUL-UP, all based on the use of knowledge of syntax, semantics and pragmatics at all stages of the understanding process to cope with errors. | null | Main paper:
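The goal-based disambiguation used here, in which a candidate reading is rejected because it violates knowledge about the actors' goals (enemies do not peaceably hand weapons to each other), can be sketched as follows. The interpretation records and the checking rule are illustrative assumptions, not NOMAD's actual representation.

```python
# Illustrative sketch of rejecting an interpretation that violates goal knowledge.

def violates_goals(interp):
    return (interp["action"] == "ATRANS"            # peaceable transfer
            and interp["object-class"] == "weapon"
            and interp["actor-relation"] == "enemy")

def choose_interpretation(candidates):
    """Return the first reading that does not violate goal knowledge."""
    for interp in candidates:
        if not violates_goals(interp):
            return interp
    return None

readings_of_returned = [
    {"sense": "deliver",   "action": "ATRANS", "object-class": "weapon",
     "actor-relation": "enemy"},
    {"sense": "fire back", "action": "PROPEL", "object-class": "weapon",
     "actor-relation": "enemy"},
]
print(choose_interpretation(readings_of_returned)["sense"])   # -> fire back
```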
introduction:
Consider the following (scribbled) message, left by a computer science professor on a colleague's desk:[i] Met w/chrmn agreed on changes to prposl nxt mtg 3 Feb.A good deal of informal text such as everyday messages like the one above are very ill-formed grammatically and contain misspellings, ad hoc abbreviations and lack of important punctuation such as periods between sentences. Yet people seem to easily understand such messages, and in fact most people would probably understand the above message just as readily as they would a more '~ell-formed" version:"I met with the chairman, and we agreed on what changes had to be made to the proposal. Our next meeting will be on Feb. 3."This research was supported in part by the Naval Ocean Systems Center under contract N-00123-81-C-I078.No extra information seems to be conveyed by this longer version, and message-writers appear to take advantage of this fact by writing extremely terse messages such as [I] , apparently counting on readers" ability to analyze them in spite of their messiness.If "scruffy" messages such as this one were only intended for a readership of one, there wouldn't be a real problem. However, this informal type of "memo" message is commonly used for information transfer in many businesses, universities, government offices, etc. An extreme case of such an organization is the Navy, which every hour receives many thousands of short messages, each of which must be encoded into computer-readable form for entry into a database. Currently, these messages come in in very scruffy form, and a growing number of man-hours is spent on the encoding-byhand process. Hence there is an obvious benefit to partially automating this encoding process. The problem is that most existing text-understanding systems (e.g.ELI [Riesbeck and Schank 76] , SAM [Cullingford 77 ], FRUMP [DeJong 79], IPP [Lebowitz 80] ) would fai£ to successfully analyze ill-formed texts like [i], because they have been designed under the assumption that they will receive "heater" input, e.g., edited input such as is found in newspapers or books. This paper briefly outlines some of the properties of texts like [i] , that allow readers to unaerstand it in spite of its scruffiness; and some of the knowledge and mechanisms that seem to underlle readers" ability to understand such texts. A text-processing system called NOMAD is discussed which makes use of the theories described here to process scruffy text in the domain of everyday Navy messages. NOMAD's operation is based on the use of expectations during understanding, based both on knowledge of surface English and on world knowledge of the s~tuation being described. These syntactic and semantic expectations can be used to aid naturally in the solution of a wide range of problems that arise in understanding both "scruffy" texts and pre-edited texts, such as figuring out unknown words from context, constraining the possible word-senses of words with multiple meanings (ambiguity), filling in missing words (ellipsis), and resolving unknown referents (anaphora). [Granger 1977 ] was the first program that could figure out meanings of unknown words encountered during text understanding. FOUL-UP was an attempt to model the corresponding human ability commonly known as "f~guring out a word from context". 
FOUL-UP worked with the SAM system [Cullingford 1977 ], using the expectations generated by scripts [Schank and Abelson 19771 to restrict the possible meanings of a word, based on what object or action would have occurred in that position according to the script for the story.For instance, consider the following excerpt from a newspaper report of a car accident:[2] Friday, a car swerved off Route 69.The vehicle struck an embankment.The word "embankment" was unknown to the SAM system, but it had encoded predictions about certain attributes of the expected conceptual object of the PROPEL action (the object that the vehicle struck); namely, that it would be a physical object, and would function as an "obstruction" in the vehicle-accident script. (In addition, the conceptual analyzer (ELI - [Riesbeck and Schank 1976] ) had the expectation that the word in that sentence position would be a noun.)Hence, when the unknown word was encountered, FOUL-UP would make use of those expected attributes to construct a memory entry for the word "embankment", indicating that it was a noun, a physical object, and an "obstruction" in vehicleaccident situatlons.It would then create a dictionary definition that the system would use from then on whenever the word was encountered in this context.
blame assignment in the nomad system:
But even if the SAM system had known the word "embankment", it would not have been able to handle a less edited version of the story, such as this:[3] Vehcle act Rt69; car strck embankment; drivr dead one psngr in,; ser dmg to car full rpt frtncmng.While human readers would have little difficulty understanding this text, no existing computer programs could do so.The scope of this problem is wide; examples of texts that present "scruffy" difficulties to readers are completely unedited texts, such as messages composed in a hurry, with little or no re-writlng, rough drafts, memos, transcripts of conversatzons, etc. Such texts may contain missing words, ad hoc abbreviations of words, poor syntax, confusing order of presentation of ideas, mis-spellzngs, lack of punctuation, etc.Even edited texts such as newspaper stories often contain misspellzngs, words unknown to the reader, and ambiguities;and even apparently very simple texts may contain alternative possible interpretations, which can cause a reader to construct erroneous initial inferences that must later be corrected (see [Granger 1980 [Granger ,1981 ).The following sections describe the NOMAD system, which incorporates FOUL-UP's abilities as well as significantly extended abilities to use syntactic and semantic expectations to resolve these difficulties, in the context of Naval messages.MIDWAY SIGHTED ENEMY. FIRED.Problem: Missing subject and objects. "Fired" builds a PROPEL, and expects a subject and objects to play the conceptual roles of ACTOR (who did the PROPELing), OBJECT (what got PROPELed) and RECIPI-ENT (who got PROPELed at).However, no surface subjects or objects are presented here. Solution: Use expectations to fill in conceptual cases.NOMAD uses situational expectations from the known typical sequence of events in an "ATTACK" (which consists of a movement (PTRANS), a sighting (ATTEND) and firing (PROPEL)). Those expectations say (among other things) that the actor and recipient of the PROPEL will be the same as the actor and direction of the ATTEND, and that the OBJECT that got PROPELed will be some kind of projectile, which is not further specified here.We sighted an enemy ship. We fired at the ship.
introduction:
NOMAD incorporates ideas from, and builds on, earlier work on conceptual analysis (e.g., [Riesbeck and Schank 1976] , [Birnbaum and Selfridge 1979] ); situation and intention inference (e.g., [Cullingford 1977|, [Wilensky 1978 ; and English generatlon (e.g. [Goldman 1973 ], [McGuire 1980 ). What differentiates NOMAD significantly from its predecessors are its error recognition and error correction abilities, which enable it to read texts more complex than those that can be handled by other text understanding systems.We have so far identified the following five types of problems that occur often in scruffy unedited texts. Each problem is illustrated by an example from the domain of Navy messages. The next section will then describe how NOMAD deals with each type of error. Kashin.--"returned" in the sense of re-tal~ation after a previous attack, or "returned" in the sense of "peaceably delivered to"?) When these problems arise in a message, NOMAD must first recognize what the problem is (which is often difficult to do), and then attempt t~ ~orrect the error.These two processes are briefly described in the fnllowing sections.For each of the above examples of problems encountered, NOMAD's method of recognizing and correcting the problem are described here, along with actual English input and output from NOMAD.ENEMY SCUDDED BOMBS AT US.Problem: Unknown word. The unknown word "scudded" is trivial to recognize, since it is the only word without a dictionary entry. Once it has been recognized, NOMAD checks it to see if it could be (a) a misspelllng, (b) an abbreviation or (c) a regular verb-tense of some known word. Solution: Use expectations to figure out word meaning from context. When the spelling checkers fail, a FOUL-UP mechanisms is called which uses knowledge of what actions can be done by an enemy actor, to a weapon object, directed at us. It infers that the action is probably a propel. Again, this is only an educated guess by the system, and may have to be corrected later on the basis of future information.An enemy ship fired bombs at our ship.Problem: Missing sentence boundaries. NOMAD has no expectations for a new verb ("opened") to appear immediately after the completed clause "locked on". It tries but fails to connect "opened" to the phrase "locked on". Solution: Assume the syntactic expectations failed because a clause boundary was not adequately marked in the message; assume such a boundary is there. NOMAD assumes that there may have been an intended sentence separation before "opened", since no expectations can account for the word in this sen-tence position. Hence, NOMAD saves "locked on" as one sentence, and continues to process the rest of the text as a new sentence.We aimed at an unknown object. object.We fired at the
input::
LOST CONTACT ON ENEMY SHIP.

Problem: Missing event in event sequence. NOMAD's knowledge of the "Tracking" situation cannot understand a ship losing contact until some contact has been gained.

Solution: Use situational expectations to infer missing events. NOMAD assumes that the message implies the previous event of gaining contact with the enemy ship, based on the known sequence of events in the "Tracking" situation.

We sighted an enemy ship. Then we lost radar visual contact with the ship.

Problem: Ambiguous interpretation of action. NOMAD cannot tell whether the action here is "returning" fire to the enemy, i.e., firing back at them (after they presumably had fired at us), or peaceably delivering bombs, with no firing implied.

Solution: Use expectations of probable goals of actors. NOMAD first interprets the sentence as "peaceably delivering" some bombs to the ship. However, NOMAD contains the knowledge that enemies do not give weapons, information, personnel, etc., to each other. Hence it attempts to find an alternative interpretation of the sentence, in this case finding the "returned fire" interpretation, which does not violate any of NOMAD's knowledge about goals. It then infers, as in the above example, that the enemy ship must have previously fired on us.

An unknown enemy ship fired on us. Then we fired bombs at them.

The ability to understand text is dependent on the ability to understand what is being described in the text. Hence, a reader of, say, English text must have applicable knowledge of both the situations that may be described in texts (e.g., actions, states, sequences of events, goals, methods of achieving goals, etc.) and the surface structures that appear in the language, i.e., the relations between the surface order of appearance of words and phrases, and their corresponding meaning structures. The process of text understanding is the combined application of these knowledge sources as a reader proceeds through a text. This fact becomes clearest when we investigate the understanding of texts that present particular problems to a reader. Human understanding is inherently tolerant; people are naturally able to ignore many types of errors, omissions, poor constructions, etc., and get straight to the meaning of the text. Our theories have tried to take this ability into account by including knowledge and mechanisms of error noticing and correcting as implicit parts of our process models of language understanding. The NOMAD system is the latest in a line of "tolerant" language understanders, beginning with FOUL-UP, all based on the use of knowledge of syntax, semantics and pragmatics at all stages of the understanding process to cope with errors.
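As a rough illustration of the unknown-word strategy described earlier for "scudded" (try a spelling fix, an abbreviation, or a verb form of a known word first, then fall back on contextual expectations), the following Python sketch uses an invented lexicon, an invented context frame, and a deliberately crude spelling check; it is not how FOUL-UP or NOMAD is actually implemented:

LEXICON = {"fire", "sight", "strike", "return", "lock"}

def roughly_one_edit_apart(a, b):
    """Crude stand-in for a spelling-correction check (not a true edit distance)."""
    if abs(len(a) - len(b)) > 1:
        return False
    mismatches = sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))
    return mismatches <= 1

def resolve_unknown(word, context_frame):
    root = word[:-2] if word.endswith("ed") else word      # strip a verb ending
    for known in LEXICON:
        if root == known or roughly_one_edit_apart(root, known):
            return ("spelling-or-tense", known)
        if known.startswith(root) and len(root) >= 3:
            return ("abbreviation", known)
    # FOUL-UP-style fallback: let the surrounding expectations decide.
    if context_frame == {"actor": "ENEMY", "object": "WEAPON", "direction": "US"}:
        return ("inferred-act", "PROPEL")                   # an educated guess only
    return ("unknown", None)

print(resolve_unknown("scudded",
                      {"actor": "ENEMY", "object": "WEAPON", "direction": "US"}))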
Appendix:
| null | null | null | null | {
"paperhash": [
"granger|directing_and_re-directing_inference_pursuit:_extra-textual_influences_on_text_interpretation",
"granger|when_expectation_fails:_towards_a_self-correcting_inference_system",
"granger|foul-up:_a_program_that_figures_out_meanings_of_words_from_context",
"lebowitz|generalization_and_memory_in_an_integrated_understanding_system"
],
"title": [
"Directing And Re-Directing Inference Pursuit: Extra-Textual Influences on Text Interpretation",
"When Expectation Fails: Towards a Self-Correcting Inference System",
"FOUL-UP: A Program that Figures Out Meanings of Words from Context",
"Generalization and memory in an integrated understanding system"
],
"abstract": [
"Understanding a text depends on s reader's ability to construct a coherent interpretation that accounts for the statements in the text. However, a given text does not always imply a unique coherent interpretation. In particular, readers can be steered away from an otherwise plausible explanation for a story by such extra-textual factors as the source of the text, the reading purpose, interruptions during reading, or repeated re-questioning of the reader. Some of these effects have been observed in experiments in cognitive psychology (e.g., Black [1980]). This paper presents a computer program called MACARTHUR that can vary both the depth and direction of its inference pursuit in response to re-questioning, resulting in a series of markedly different interpretations of the same text.",
"Contextual understanding depends on a reader's ability to correctly infer a context within which to interpret the events in a story. This \"context-selection problem\" has traditionally been expressed in terms of heuristics for making the correct initial selection of a story context. This paper presents a view of context selection as an ongoing process spread throughout the understanding process. This view requires that the understander be capable of recognizing and correcting erroneous initial context inferences. A computer program called ARTHUR is described, which selects the correct context for a story by dynamically re-evaluating its own initial inferences in light of subsequent information in a story.",
"The inferencing task of figuring out words from context is implemented in the presence of a large database of world knowledge. The program does not require interaction with the user, but rather uses internal parser expectations and knowledge embodied in scripts to figure out likely definitions for unknown words, and to create context-specific definitions for such words.",
"Abstract : Generalization and memory are part of natural language understanding. As people read stories describing various situations they are able to recall similar episodes form memory and use them as a basis to form generalizations about the way such situations normally occur. This thesis describes an integrated system for language understanding IPP (Integrated Partial Parser), that encompasses the ability to generalize and record information in long-term memory as well as conceptual analysis. IPP is a program that learns about the world by reading stories taken from newspapers and th UPI news wire, adding information from these stories to memory, and making generalizations that describe specific situations. It uses the generalizations that it has made to help in understanding future stories. As it reads stories, IPP adds them to its permanent memory. If it locates similar stories in memory as it does this, then it attempts to make generalizations that describe the similarities among the events. Such generalizations form the basis for organizing events in memory and understanding later stories. IPP also includes a procedure for confirming generalizations as further stories are read. In order to analyze the text that it reads, IPP makes extensive use of top-down, predictive processing. As it processes a story, IPP accesses memory in an attempt to identify generalizations describing stereotypical situations that can provide predictions to be used in understanding. Such use of memory to provide top-down context results in a robust and efficient understanding system. (Author)"
],
"authors": [
{
"name": [
"R. Granger"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Granger"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Granger"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Michael Lebowitz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null
],
"s2_corpus_id": [
"15903284",
"16590428",
"9255668",
"60891284"
],
"intents": [
[],
[],
[],
[]
],
"isInfluential": [
false,
false,
true,
false
]
} | null | 512 | 0.017578 | null | null | null | null | null | null | null | null |
e7245b607c40fa6ff67306c0d7d0d99cecb64dbe | 9578469 | null | What{'}s in a Semantic Network? | Ever since Woods's "What's in a Link" paper, there has been a growing concern for formalization in the study of knowledge representation. Several arguments have been made that frame representation languages and semantic-network languages are syntactic variants of the ftrst-order predicate calculus (FOPC). The typical argument proceeds by showing how any given frame or network representation can be mapped to a logically isomorphic FOPC representation. For the past two years we have been studying the formalization of knowledge retrievers as well as the representation languages that they operate on. This paper presents a representation language in the notation of FOPC whose form facilitates the design of a semantic-network-like retriever. | {
"name": [
"Allen, James F. and",
"Frisch, Alan M."
],
"affiliation": [
null,
null
]
} | null | null | 20th Annual Meeting of the Association for Computational Linguistics | 1982-06-01 | 27 | 44 | null | We are engaged in a long-term project to construct a system that can partake in extended English dialogues on some reasonably well specified range of topics. A major part of this effort so far has been the specification of a knowledge representation. Because of the wide range of issues that we are trying to capture, which includes the representation of plans, actions, time, and individuals' beliefs and intentions, it is crucial to work within a framework general enough to accommodate each issue. Thus, we began developing our representation within the first-order predicate calculus. So far, this has presented no problems, and we aim to continue within this framework until some problem forces us to do otherwise.Given this framework, we need to be able to build reasonably efficient systems for use in the project. In particular, the knowledge representation must be able to support the natural language understanding task. This requires that certain forms of inference must be made. ~' Within a general theorem-proving framework, however, those inferences desired would be lost within a wide range of undesired inferences. Thus we have spent considerable effort in constructing a specialized inference component that can support the language understanding task.Before such a component could be built, we needed to identify what inferences were desired. Not surprisingly, much of the behavior we desire can be found within existing semantic network systems used for natural language understanding. Thus the question "What inferences do we need?" can be answered by answering the question "What's in a semantic network?" Ever since Woods's [1975] "What's in a Link" paper, there has been a growing concern for formalization in the study of knowledge representation. Several arguments have been made that frame representation languages and semantic-network languages are syntactic variants of the f~st-order predicate calculus (FOPC). The typical argument (e.g., [Hayes, 1979; Nilsson, 1980; Charniak, 1981a] ) proceeds by showing how any given frame or network representation can be mapped to a logically isomorphic (i.e., logically equivalent when the mapping between the two notations is accounted for) FOPC representation. We emphasize the term "logically isomorphic" because these arguments have primarily dealt with the content (semantics) of the representations rather than their forms (syntax). Though these arguments are valid and scientifically important, they do not answer our question.Semantic networks not only represent information but facilitate the retrieval of relevant facts. For instance, all the facts about the object JOHN are stored with a pointer directly to one node representing JOHN (e.g., see the papers in [Findler, 1979] ). Another example concerns the inheritance of properties. Given a fact such as "All canaries are yellow," most network systems would automatically conclude that "Tweety is yellow," given that Tweety is a canary. This is typically implemented within the network matcher or retriever.We have demonstrated elsewhere [Frisch and Allen, 1982] the utility of viewing a knowledge retriever as a specialized inference engine (theorem prover). A specialized inference engine is tailored to treat certain predicate, function, and constant symbols differently than others. 
This is done by building into the inference engine certain true sentences involving these symbols and the control needed to handle these sentences. The inference engine must also be able to recognize when it is able to use its specialized machinery. That is, its specialized knowledge must be coupled to the form of the situations that it can deal with.

For illustration, consider an instance of the ubiquitous type hierarchies of semantic networks:

FORDS
 | subtype
MUSTANGS
 | type
OLD-BLACK1

By mapping the types FORDS and MUSTANGS to be predicates which are true only of Fords and Mustangs respectively, the following two FOPC sentences are logically isomorphic to the network:

(1.1) ∀x MUSTANGS(x) → FORDS(x)
(1.2) MUSTANGS(OLD-BLACK1)

However, these two sentences have not captured the form of the network, and furthermore, not doing so is problematic to the design of a retriever. The subtype and type links have been built into the network language because the network retriever has been built to handle them specially. That is, the retriever does not view a subtype link as an arbitrary implication such as (1.1) and it does not view a type link as an arbitrary atomic sentence such as (1.2).

In our representation language we capture the form as well as the content of the network. By introducing two predicates, TYPE and SUBTYPE, we capture the meaning of the type and subtype links. TYPE(i,t) is true iff the individual i is a member of the type (set of objects) t, and SUBTYPE(t1,t2) is true iff the type t1 is a subtype (subset) of the type t2. Thus, in our language, the following two sentences would be used to represent what was intended by the network:

(2.1) SUBTYPE(MUSTANGS,FORDS)
(2.2) TYPE(OLD-BLACK1,MUSTANGS)

It is now easy to build a retriever that recognizes subtype and type assertions by matching predicate names. Contrast this to the case where the representation language used (1.1) and (1.2) and the retriever would have to recognize these as sentences to be handled in a special manner.

But what must the retriever know about the SUBTYPE and TYPE predicates in order that it can reason (make inferences) with them? There are two assertions, (A.1) and (A.2), such that {(1.1),(1.2)} is logically isomorphic to {(2.1),(2.2),(A.1),(A.2)}. (Note: throughout this paper, axioms that define the retriever's capabilities will be referred to as built-in axioms and specially labeled A.1, A.2, etc.)

(A.1) ∀t1,t2,t3 SUBTYPE(t1,t2) ∧ SUBTYPE(t2,t3) → SUBTYPE(t1,t3)
(SUBTYPE is transitive.)
(A.2) ∀o,t1,t2 TYPE(o,t1) ∧ SUBTYPE(t1,t2) → TYPE(o,t2)
(Every member of a given type is a member of its supertypes.)

The retriever will also need to know how to control inferences with these axioms, but this issue is considered only briefly in this paper.

The design of a semantic-network language often continues by introducing new kinds of nodes and links into the language. This process may terminate with a fixed set of node and link types that are the knowledge-structuring primitives out of which all representations are built. Others have referred to these knowledge-structuring primitives as epistemological primitives [Brachman, 1979], structural relations [Shapiro, 1979], and system relations [Shapiro, 1971]. If a fixed set of knowledge-structuring primitives is used in the language, then a retriever can be built that knows how to deal with all of them. The design of our representation language very much mimics this approach.
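A minimal Python sketch of what it means to build (A.1) and (A.2) into the retriever, rather than stating them as ordinary implications over which a general theorem prover would chain, might look like the following; the data structures and function names are ours, not the authors':

SUBTYPE = {("MUSTANGS", "FORDS")}          # base fact (2.1)
TYPE = {("OLD-BLACK1", "MUSTANGS")}        # base fact (2.2)

def subtype_holds(t1, t2):
    """(A.1): follow SUBTYPE links transitively."""
    seen, frontier = set(), [t1]
    while frontier:
        t = frontier.pop()
        for sub, sup in SUBTYPE:
            if sub == t:
                if sup == t2:
                    return True
                if sup not in seen:
                    seen.add(sup)
                    frontier.append(sup)
    return False

def type_holds(individual, t):
    """(A.2): an individual belongs to each asserted type and all of its supertypes."""
    return any(t0 == t or subtype_holds(t0, t) for i, t0 in TYPE if i == individual)

print(type_holds("OLD-BLACK1", "FORDS"))   # True, via (2.1), (2.2), (A.1), (A.2)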
Our knowledge-structuring primitives include a fixed set of predicate names and terms denoting three kinds of elements in the domain. We give meaning to these primitives by writing domainindependent axioms involving them. Thus far in this paper we have introduced two predicates (TYPE and SUBTYPE'), two kinds of elements (individuals and types), and two axioms ((A.1) and (A.2)). We shall name types in uppercase and individuals in uppercase letters followed by at least one digit.Considering the above analysis, a retrieval now is viewed as an attempt to prove some queried fact logically follows from the base facts (e.g., (2.1), (2.2)) and the built-in axioms (such as A.1 and A.2). For the purposes of this paper, we can consider aa~ t~ase facts to be atomic formulae (i.e., they contain no logical operators except negation). While compound formulae such as disjunctions can be represented, they are of little use to the semantic network retrieval facility, and so will not be considered in this paper. We have implemented a retriever along these lines and it is currently being used in the Rochester Dialogue System [Allen, 1982] .One Of the crucial facilities needed by natural language systems is the ability to reason about whether individuals are equal. This issue is often finessed in semantic networks by assuming that each node represents a different individual, or that every type in the type hierarchy is disjoint. This assumption has been called E-saturation by [Reiter, 1980] . A natural language understanding system using such a representation must decide on the referent of each description as the meaning representation is constructed, since if it creates a new individual as the referent, that individual will then be distinct from all previously known individuals. Since in actual discourse the referent of a description is not always recognized until a few sentences later, this approach lacks generality.One approach to this problem is to introduce full reasoning about equality into the representation, but this rapidly produces a combinatorially, prohibitive search space. Thus other more specialized techniques are desired. We shall consider mechanisms for proving inequality f'trst, and then methods for proving equality. Hendrix [1979] introduced some mechanisms that enable inequality to be proven. In his system, mere are two forms of subtype links, and two forms of instance links. This can be viewed in our system as follows: the SUBTYPE and TYPE predicates discussed above make no commitment regarding equality. However, a new relation, DSUBTYPE(tl,t2) , asserts that t 1 is a SUBTYPE of t 2, and also that the elements of t 1 are distinct from all other elements of other DSUBTYPES oft 2. This is captured by the axioms (A.4) v t, tl,t2,il,i2 (DSUBTYPE(tl,t) A DSUBTYPE(t2,t) A TYPE(il,tl) A TYPE(i2,t 2) A ~IDENTICAL(tl,t2)) --, (i 1 * i 2) (A.5) v tl,t DSUBTYPE(tl,t) ---, SUBTYPE(tl,t)We cannot express (A.4) in the current logic because the predicate IDFA',ITICAL operates on the syntactic form of its arguments rather than their referents. Two terms are IDENTICAL only if they are lexicaUy the same. To do this formally, we have to be able to refer to the syntactic form of terms. This can be done by introducing quotation into the logic along the lines of [Perlis, 1981] , but is not important for the point of this paper.A similar trick is done with elements of a single type. 
The predicate DTYPE(i,t) asserts that i is an instance of type t, and also is distinct from any other instances of t where the DTYPE holds. Thus we need(A.6) v il,i2,t (DTYPE(il,t) A DTYPE(i2,t) A ~ IDENTICAL(il,i2) ) • --, (i 1 * i 2) (A.7) vi, t DTYPE(i,t) ---, TYPE(i,t)Another extremely useful categorization of objects is the partitioning of a type into a set of subtypes, i.e., each element of the type is a member of exactly one subtype. This can be defined in a similar manner as above.Turning to methods for proving equality, [Tarjan, 1975] describes an efficient method for computing relations that form an equivalence class. This is adapted to support full equality reasoning on ground terms. Of course it cannot effectively handle conditional assertions of equality, but it covers many of the typical cases.Another technique for proving equality exploits knowledge about types. Many types are such that their instances are completely defined by their roles. For such a type T, if two instances I1 and 12 of T agree on all their respective rc!~ then they are equal. If I1 and I2 have a role where their values are not equal, then I I and I2 are not equal. If we finally add the assumption that every instance of T can be characterized by its set of role values, then we can enumerate the instances of type T using a function (say t) that has an argument for each role value.For example, consider the type AGE-RELS of age properties, which takes two roles, an OBJECT and a VALUE. Thus, the property P1 that captures the assertion "John is 10" would be described as follows:(33) TYPE(P1,AGE-RELS) AThe type AGE-RELS satisfies the above properties, so any individual of type AGE-RELS with OBJECT role JOHN1 and VALUE role 10 is equal to P1. The retriever encodes such knowledge in a preprocessing stage that assigns each individual of type AGE-RELS to a canonical name. The canonical name for P1 would simply be "age-rels(JOHNl,10)".Once a representation has equality, it can capture some of the distinctions made by perspectives in KRL. The same object viewed from two different perspectives is captured by two nodes, each with its own type, roles, and relations, that are asserted to be equal.Note that one cannot expect more sophisticated reasoning about equality than the above from the retriever itself. Identifying two objects as equal is typically not a logical inference. Rather, it is a plausible inference by some specialized program such as the reference component of a natural language system which has to identify noun phrases. While the facts represented here would assist such a component in identifying possible referencts for a noun phrase given its description, it is unlikely that they would logically imply what the referent is.Semantic networks are useful because they structure information so that it is easy to retrieve relevant facts, or facts about certain objects. Objects are represented only once in the network, and thus there is one place where one can find all relations involving that object (by following back over incoming ROLE arcs). While we need to be able to capture such an ability in our system, we should note that this is often not a very useful ability, for much of one's knowledge about an object will ,lot be attached to that object but will be acquired from the inheritance hierarchy. In a spreading activation type of framework, a considerable amount of irrelevant network will be searched before some fact high up in the type hierarchy is found. 
In addition, it is very seldom that one wants to be able to access all facts involving an object; it is much more likely that a subset of relations is relevant.If desired, such associative links between objects can be simulated in our system. One could find all properties of an object ol (including those by inheritance) by retrieving all bindings of x in the query 3x,r ROLE(x,r,ol).The ease of access provided by the links in a semantic network is effectively simulated simply by using a hashing scheme on the structure of all ROLE predicates. While the ability to hash on structures to find facts is crucial to an efficient implementation, the details are not central to our point here.Another important form of indexing is found in Hendrix where his partition mechanism is used to provide a focus of attention for inference processes [Grosz, 1977] . This is just one of the uses of partitions. Another, which we did not need, provided a facility for scoping facts within logical operators, similar to the use of parentheses in FOPC. Such a focus mechanism appears in our system as an extra argument on the main predicates (e.g., HOLDS, OCCURS, etc.).Since contexts are introduced as a new class of objects in the language, we can quantify over them and otherwise talk about them. In particular, we can organize contexts into a lattice-like structure (corresponding to Hendrix's vistas for partitions) by introducing a transitive relation SUBCONTEXT. As with the SUBTYPE relation, this axiom would defy an efficient implementation if the contexts were not organized in a finite lattice structure. Of course, we need axioms similar to (A,9) for the OCCURS and IS-RF_.AL predicates.We have argued that the appropriate way to design knowledge representations is to identify those inferences that one wishes to facilitate. Once these are identified, one can then design a specialized limited inference mechanism that can operate on a data base of first order facts. In this fashion, one obtains a highly expressive representation language (namely FOPC), as well as a well-defined and extendable retriever.We have demonstrated this approach by outlining a portion of the representation used in ARGOT, the Rochester Dialogue System [Allen, 1982] . We are currently extending the context mechanism to handle time, belief contexts (based on a syntactic theory of belief [Haas, 1982] ), simple hypothetical reasoning, and a representation of plans. Because the matcher is defined by a set of axioms, it is relatively simple to add new axioms that handle new features.For example, we are currently incorporating a model of temporal knowledge based on time intervals [Allen, 1981a] . This is done by allowing any object, event, or relation to be qualified by a time interval as follows: for any untimed concept x, and any time interval t, there is a timed concept consisting of x viewed during t which is expressed by the term (t-concept x t).This concept is of type (TIMED Tx), where Tx is the type of x. Thus we require a type hierarchy of timed concepts that mirrors the hierarchy of untimed concepts.Once this is done, we need to introduce new built-in axioms that extend the retriever. For instance, we define a predicate, DURING(a,b), that is true only if interval a is wholly contained in interval b. Now, if we want the retriever to automatically infer that if relation R holds during an interval t, then it holds in all subintervals of t, we need the following built-in axioms. 
First, DURING is transitive:(A.10) V a,b,c DURING(a,b) A DURING(b,c) --, DURING(a,c)Second, if P holds in interval t, it holds in all subintervals of t.(A.11) v p,t,t',c HOLDS(t-concept(p,t),c) A DURING(t' ,t) ---, HOLDS(t-concept(p,t'),c).Thus we have extended our representation to handle simple timed concepts with only a minimal amount of analysis.Unfortunately, we have not had the space to describe how to take the specification of the retriever (namely axioms (A.1) -(A.11)) and build an actual inference program out of it. A technique for building such a limited inference mechanism by moving to a meta-logic is described in [Frisch and Allen, 1982] .One of the more interesting consequences of this approach is that it has led to identifying various difference modes of retrieval that are necessary to support a natural language comprehension task, We have considered so far only one mode of retrieval, which we call provability mode. In this mode, the query must be shown to logically follow from the built-in axioms and the facts in the knowledge base. While this is the primary mode of interaction, others are also important.In consistency mode, the query is checked to see if it is logically consistent with the facts in the knowledge base with respect to the limited inference mechanism. While consistency in general is undecidable, with respect to the limited inference mechanism it is computationally feasible. Note that, since the retriever is defined by a set of axioms rather than a program, consistency mode is easy to define.Another important mode is compatibility mode, which is very useful for determining the referents of description. A query in compatibility mode succeeds if there is a set of equality and inequality assertions that can be assumed so that the query would succeed in provability mode. For instance, suppose someone refers to an event in which John hit someone with a hat. We would like to retrieve possible events that could be equal to this. Retrievals in compatibility mode are inherently expensive and so must be controlled using a context mechanism such as in [Grosz, 1977] . We are currently attempting to formalize this mode using Reiter's nonmonotonic logic for default reasoning.We have implemented a version of this system in HORNE [Allen and Frisch, 1981] , a LISP embedded logic programming language. In conjunction with this representation is a language which provides many abbreviations and facilities for system users. For instance, users can specify what context and times they are working with respect to, and then omit this information from their interactions with the system. Also, using the abbreviation conventions, the user can describe a relation and events without explicitly asserting the TYPE and ROLE assertions. Currently the system provides the inheritance hierarchy, simple equality reasoning, contexts, and temporal reasoning with the DURING hierarchy. | null | An important property of a natural language system is that it often has only partial information about the individuals (objects, events, and relations) that are talked about. Unless one assumes that the original linguistic analysis can resolve all these uncertainties and ambiguities, one needs to be able to represent partial knowledge. Furthermore, the things talked about do not necessarily correspond to the world: objects are described that don't exist, and events are described that do not occur.In order to be able to capture such issues we will need to include in the domain all conceivable individuals (cf. 
all conceivable concepts [Brachman, 1979] ). We will then need predicates that describe how these concepts correspond to reality. The class, of individuals in the world is subcategorized into three major classes: objects, events, and relations. We consider each in turn.Objects include all conceivable physical objects as well as abstract objects such as ideas, numbers, etc. The most important knowledge about any object is its type. Mechanisms for capturing this were outlined above. Properties of objects are inherited from statements involving universal quantification over the members of a type. The fact that a physical object, o, actually exists in the world will be asserted as 1S-REAL(o).The problems inherent in representing events and actions are well described by Davidson [1967] . He proposes introducing events as elements in the domain and introducing predicates that modify an event description by adding a role (e.g., agent, object) or by modifying the manner in which the event occurred. The same approach has been used in virtually all semantic network-and frame-based systems [Charniak, 1981b] , most of which use a case grammar [Fillmore, 1968] to influence the choice of role names. This approach also enables quantification over events and their components such as in the sentence, "For each event, the actor of the event causes that event." Thus, rather than representing the assertion that the ball fell by a sentence such as (try-l) FALL(BALL1), the more appropriate form is (try-2) 3 e TYPE(e,FALL-EVENTS) A OBJECT-ROLE(e,BALL1).This formalism, however, does not allow us to make assertions about roles in general, or to assert that an object plays some role in an event. For example, there is no way to express "Role fillers are unique" or "There is an event in which John played a role." Because we do not restrict ourselves to binary relations, we can generalize our representation by introducing the predicate ROLE and making rolenames into individuals in the domain. ROLE(o, r, v) asserts that individual o has a role named r that is filled with individual v. To distinguish rolenames from types and individuals, we shall use italics for rolenames.Finally, so that we can discuss events that did not occur (as opposed to saying that such an event doesn't exis0, we need to add the predicate OCCUR. OCCUR(e) asserts that event e actually occurred. Thus, finally, the assertion that the ball fell is expressed as 33 e TYPE(e,FALL-EVENTS) AOCCUR(e).Roles are associated with an event type by asserting that every individual of that type has the desired role.To assert that every event has an OBJECT role, we state Given this formulation, we could now represent that "some event occurred involving John" by (5) a e, rolename TYPE(e,EVENTS) A ROLE(e, rolename, JOHN1) A OCCUR(e) By querying fact (5) in our retriever, we can find all events involving John.One of the most important aspects of roles is that they are functional, e.g., each event has exactly one object role, etc. Since this is important in designing an efficient retriever, it is introduced as a built-in axiom:(A.3) v r,o,vl,v2 ROLE(o,r, vl) A ROLE(o,r,v2)--, (vl = v2).The final major type that needs discussing is the class of relations. The same problems that arise in representing events arise in representing relations, l:or instance, often the analysis of a simple noun-noun phrase such as "the book cook" initially may be only understood to the extent that some relationship holds between "book" and "cook." 
If we" want to represent this, we need to be able to partially describe relations. This problem is addressed in semantic networks by describing relations along the same lines as events.For example, rather than expressing "John is 10" as 6 As with events, describing a relation should not entail that the relation holds. If this were the case, it would be difficult to represent non-atomic sentences such as a disjunction, since in describing one of the disjuncts, we would be asserting that the disjunct holds. We assert that a relation, r, is true with HOLDS(r). Thus the assertion that "John is 10" would involve (7) conjoined withEQUATION] p TYPE(p,AGE-RELATIONS) A ROLE(p, OBJECT, JOHN1) A ROLE(p, VALUE, IO) ^ HOLDS(p)The assertion "John is not 10" is not the negation of (8), but is (7) conjoined with -HOLDS(p), i.e.,EQUATIONWe could also handle negation by introducing the type NO'I'-REIATIONS, which takes one rd. ~,.,,, is filled by another relation. To assert the above, we woutd construct an individual N1, of type NOT-RELATIONS, with its role filled with p, and assert that N1 holds. We see no advantage to this approach, however, since negation "moves through" the HOLDS predicate. In other words, the relation "not p" holding is equivalent to the relation "p" not holding. Disjunction and conjunction are treated in a similar manner.The system described so far, though simple, is close to providing us with one of the most characteristic inferences made by semantic networks, namely inheritance. For example, we might have the following sort of information in our network:(10) SUBTYPE(MAMMALS,ANIMALS) (11) S UBTYPE(2-LEGGED-ANIMALS,ANIMALS) (12) SUBTYPE(PERSONS,MAMMALS) (13) SUBTYPE(PERSONS,2-LEGGED-ANIMALS) (14) SUBTYPE(DOGS,MAMMALS) (15) TYPE(GEORGE1,PERSONS)In a notation like in [Hendrix, 1979] , these facts would be represented as:ANIMALS 2-LE MAMMALS PERSONS DOGS T GEORGE1In addition, let us assume we know that all instances of 2-LEGGED-ANIMALS have two legs and that all instances of MAMMALS are warm-blooded:(16) v x TYPE(x,2-LEGGF_.D-ANIMALS) HAS-2-LEGS(x) (17) v y TYPE(y,MAMMALS) . -~ WARM-BLOODED(y)These would be captured in the Hendrix formalism using his delineation mechanism.Note that relations such as "WARM-BLOODED" and "HAS-2-LEGS" should themselves be described as relations with roles, but that is not necessary for this example. Given these facts, and axioms (A.1) to (A.3), we can prove that "George has two legs" by using axiom (A.2) on (13) and 15 and then using (18) with (16) to conclude (19) HAS-2-LEGS(GEORGE1).In order to build a retriever that can perform these inferences automatically, we must be able to distinguish facts like (16) and (17) from arbitrary facts involving implications, for we cannot allow arbitrary chaining and retain efficiency. This could be done by checking for implications where the antecedent is composed entirely of type restrictions, but this is difficult to specify. The route we take follows the same technique described above when we introduced the TYPE and SUBTYPE predicates. We introduce new notation into the language that explicitly captures these cases. The new form is simply a version of the typed FOPC, where variables may be restricted by the type they range over. Thus, 16and 17 The retriever now can be implemented as a typed theorem prover that operates only on atomic base facts (now including 20and 21) and axioms (A.1) to (A.3).We now can deduce that GEORGE1 has two legs and that he is warm-blooded. 
Note that objects can be of many different types as well as types being subtypes of different types. Thus, we could have done the above without the type PERSONS, by making GEORGE1 of type 2-LEGGED-ANIMALS and MAMMALS. | In the previous section we saw how properties could be inherited. This inheritance applies to role assertions as well. For example, given a type EVILNTS that has an OBJECT role. i.e., Then if ACTIONS are a subtype of events, i.e., Another common technique used in semantic network systems is to introduce more specific types of a given type by specifying one (or more) of the role values. For instance, one might introduce a subtype of ACTION called ACTION-BY-JACK, i.e., (27) If we can put this into a form that is recognizable to the retriever, then we could assert such facts directly without having to introduce arbitrary new types.The extension we make this time is from what we called a type logic to a role logic. This allows quantified variables to be restricted by role values as well as type. Thus, in this new notation, (30) would be expressed as The retriever recognizes these new forms and fully reasons about the role restrictions. It is important to remember that each of these notation changes is an extension onto the original simple language. Everything that could be stated previously can still be stated. The new notation, besides often being more concise and convenient, is necessary only if the semantic network retrieval facilities are desired.Note also that we can now define the inverse of (28), and state that all actions with actor JACK are necessarily of type ACTION-BY-JACK. This can be expressed as 32v a:ACTIONS [ACTOR JACK] TYPE(a, ACTION-BY-JACK). | null | Main paper:
the basic representation: objects, events, and relations:
An important property of a natural language system is that it often has only partial information about the individuals (objects, events, and relations) that are talked about. Unless one assumes that the original linguistic analysis can resolve all these uncertainties and ambiguities, one needs to be able to represent partial knowledge. Furthermore, the things talked about do not necessarily correspond to the world: objects are described that don't exist, and events are described that do not occur.In order to be able to capture such issues we will need to include in the domain all conceivable individuals (cf. all conceivable concepts [Brachman, 1979] ). We will then need predicates that describe how these concepts correspond to reality. The class, of individuals in the world is subcategorized into three major classes: objects, events, and relations. We consider each in turn.Objects include all conceivable physical objects as well as abstract objects such as ideas, numbers, etc. The most important knowledge about any object is its type. Mechanisms for capturing this were outlined above. Properties of objects are inherited from statements involving universal quantification over the members of a type. The fact that a physical object, o, actually exists in the world will be asserted as 1S-REAL(o).The problems inherent in representing events and actions are well described by Davidson [1967] . He proposes introducing events as elements in the domain and introducing predicates that modify an event description by adding a role (e.g., agent, object) or by modifying the manner in which the event occurred. The same approach has been used in virtually all semantic network-and frame-based systems [Charniak, 1981b] , most of which use a case grammar [Fillmore, 1968] to influence the choice of role names. This approach also enables quantification over events and their components such as in the sentence, "For each event, the actor of the event causes that event." Thus, rather than representing the assertion that the ball fell by a sentence such as (try-l) FALL(BALL1), the more appropriate form is (try-2) 3 e TYPE(e,FALL-EVENTS) A OBJECT-ROLE(e,BALL1).This formalism, however, does not allow us to make assertions about roles in general, or to assert that an object plays some role in an event. For example, there is no way to express "Role fillers are unique" or "There is an event in which John played a role." Because we do not restrict ourselves to binary relations, we can generalize our representation by introducing the predicate ROLE and making rolenames into individuals in the domain. ROLE(o, r, v) asserts that individual o has a role named r that is filled with individual v. To distinguish rolenames from types and individuals, we shall use italics for rolenames.Finally, so that we can discuss events that did not occur (as opposed to saying that such an event doesn't exis0, we need to add the predicate OCCUR. OCCUR(e) asserts that event e actually occurred. 
Thus, finally, the assertion that the ball fell is expressed as 33 e TYPE(e,FALL-EVENTS) AOCCUR(e).Roles are associated with an event type by asserting that every individual of that type has the desired role.To assert that every event has an OBJECT role, we state Given this formulation, we could now represent that "some event occurred involving John" by (5) a e, rolename TYPE(e,EVENTS) A ROLE(e, rolename, JOHN1) A OCCUR(e) By querying fact (5) in our retriever, we can find all events involving John.One of the most important aspects of roles is that they are functional, e.g., each event has exactly one object role, etc. Since this is important in designing an efficient retriever, it is introduced as a built-in axiom:(A.3) v r,o,vl,v2 ROLE(o,r, vl) A ROLE(o,r,v2)--, (vl = v2).The final major type that needs discussing is the class of relations. The same problems that arise in representing events arise in representing relations, l:or instance, often the analysis of a simple noun-noun phrase such as "the book cook" initially may be only understood to the extent that some relationship holds between "book" and "cook." If we" want to represent this, we need to be able to partially describe relations. This problem is addressed in semantic networks by describing relations along the same lines as events.For example, rather than expressing "John is 10" as 6 As with events, describing a relation should not entail that the relation holds. If this were the case, it would be difficult to represent non-atomic sentences such as a disjunction, since in describing one of the disjuncts, we would be asserting that the disjunct holds. We assert that a relation, r, is true with HOLDS(r). Thus the assertion that "John is 10" would involve (7) conjoined withEQUATION] p TYPE(p,AGE-RELATIONS) A ROLE(p, OBJECT, JOHN1) A ROLE(p, VALUE, IO) ^ HOLDS(p)The assertion "John is not 10" is not the negation of (8), but is (7) conjoined with -HOLDS(p), i.e.,EQUATIONWe could also handle negation by introducing the type NO'I'-REIATIONS, which takes one rd. ~,.,,, is filled by another relation. To assert the above, we woutd construct an individual N1, of type NOT-RELATIONS, with its role filled with p, and assert that N1 holds. We see no advantage to this approach, however, since negation "moves through" the HOLDS predicate. In other words, the relation "not p" holding is equivalent to the relation "p" not holding. Disjunction and conjunction are treated in a similar manner.
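As an illustration of how such assertions reduce to a flat collection of atomic base facts, the following Python sketch encodes "the ball fell" and "John is 10" as (predicate, arguments) tuples; the individual names E1 and P1 and the tuple encoding are invented for this example, not the authors' implementation:

facts = [
    # "The ball fell": an event individual with a type, a role, and an occurrence.
    ("TYPE",  "E1", "FALL-EVENTS"),
    ("ROLE",  "E1", "OBJECT", "BALL1"),
    ("OCCUR", "E1"),
    # "John is 10": a relation individual that is asserted to hold.
    ("TYPE",  "P1", "AGE-RELATIONS"),
    ("ROLE",  "P1", "OBJECT", "JOHN1"),
    ("ROLE",  "P1", "VALUE", 10),
    ("HOLDS", "P1"),
]

def events_involving(individual):
    """All x such that ROLE(x, r, individual) for some r (the 'incoming arcs')."""
    return {f[1] for f in facts if f[0] == "ROLE" and f[3] == individual}

print(events_involving("JOHN1"))   # {'P1'}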
making types work for you:
The system described so far, though simple, is close to providing us with one of the most characteristic inferences made by semantic networks, namely inheritance. For example, we might have the following sort of information in our network:

(10) SUBTYPE(MAMMALS,ANIMALS)
(11) SUBTYPE(2-LEGGED-ANIMALS,ANIMALS)
(12) SUBTYPE(PERSONS,MAMMALS)
(13) SUBTYPE(PERSONS,2-LEGGED-ANIMALS)
(14) SUBTYPE(DOGS,MAMMALS)
(15) TYPE(GEORGE1,PERSONS)

In a notation like that in [Hendrix, 1979], these facts would be drawn as a hierarchy: ANIMALS above 2-LEGGED-ANIMALS and MAMMALS, PERSONS below both MAMMALS and 2-LEGGED-ANIMALS, DOGS below MAMMALS, and the instance GEORGE1 under PERSONS.

In addition, let us assume we know that all instances of 2-LEGGED-ANIMALS have two legs and that all instances of MAMMALS are warm-blooded:

(16) ∀x TYPE(x,2-LEGGED-ANIMALS) → HAS-2-LEGS(x)
(17) ∀y TYPE(y,MAMMALS) → WARM-BLOODED(y)

These would be captured in the Hendrix formalism using his delineation mechanism. Note that relations such as "WARM-BLOODED" and "HAS-2-LEGS" should themselves be described as relations with roles, but that is not necessary for this example.

Given these facts, and axioms (A.1) to (A.3), we can prove that "George has two legs" by using axiom (A.2) on (13) and (15) to derive (18) TYPE(GEORGE1,2-LEGGED-ANIMALS), and then using (18) with (16) to conclude (19) HAS-2-LEGS(GEORGE1).

In order to build a retriever that can perform these inferences automatically, we must be able to distinguish facts like (16) and (17) from arbitrary facts involving implications, for we cannot allow arbitrary chaining and retain efficiency. This could be done by checking for implications where the antecedent is composed entirely of type restrictions, but this is difficult to specify. The route we take follows the same technique described above when we introduced the TYPE and SUBTYPE predicates. We introduce new notation into the language that explicitly captures these cases. The new form is simply a version of the typed FOPC, where variables may be restricted by the type they range over. Thus, (16) and (17) would now be written as:

(20) ∀x:2-LEGGED-ANIMALS HAS-2-LEGS(x)
(21) ∀y:MAMMALS WARM-BLOODED(y)

The retriever now can be implemented as a typed theorem prover that operates only on atomic base facts (now including (20) and (21)) and axioms (A.1) to (A.3). We now can deduce that GEORGE1 has two legs and that he is warm-blooded.

Note that objects can be of many different types as well as types being subtypes of different types. Thus, we could have done the above without the type PERSONS, by making GEORGE1 of type 2-LEGGED-ANIMALS and MAMMALS.
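A small Python sketch of this typed inheritance inference, using the facts (10)-(21) above, might look like the following; the dictionaries and helper names are invented for illustration and no claim is made that the authors' retriever works this way:

SUBTYPE = {("MAMMALS", "ANIMALS"), ("2-LEGGED-ANIMALS", "ANIMALS"),
           ("PERSONS", "MAMMALS"), ("PERSONS", "2-LEGGED-ANIMALS"),
           ("DOGS", "MAMMALS")}
TYPE = {("GEORGE1", "PERSONS")}
TYPED_FACTS = {("2-LEGGED-ANIMALS", "HAS-2-LEGS"),    # (20)
               ("MAMMALS", "WARM-BLOODED")}           # (21)

def supertypes(t):
    result, frontier = {t}, [t]
    while frontier:
        cur = frontier.pop()
        for sub, sup in SUBTYPE:
            if sub == cur and sup not in result:
                result.add(sup)
                frontier.append(sup)
    return result

def holds_of(individual, prop):
    """prop(individual) follows if prop is asserted of any (super)type of it."""
    types = set().union(*(supertypes(t) for i, t in TYPE if i == individual))
    return any((t, prop) in TYPED_FACTS for t in types)

print(holds_of("GEORGE1", "HAS-2-LEGS"), holds_of("GEORGE1", "WARM-BLOODED"))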
making roles work for you:
In the previous section we saw how properties could be inherited. This inheritance applies to role assertions as well. For example, given a type EVENTS that has an OBJECT role, then if ACTIONS are a subtype of EVENTS, instances of ACTIONS inherit the OBJECT role as well.

Another common technique used in semantic network systems is to introduce more specific types of a given type by specifying one (or more) of the role values. For instance, one might introduce a subtype of ACTIONS called ACTION-BY-JACK (27), the type of actions whose ACTOR role is filled by JACK. If we can put this into a form that is recognizable to the retriever, then we could assert such facts directly without having to introduce arbitrary new types.

The extension we make this time is from what we called a type logic to a role logic. This allows quantified variables to be restricted by role values as well as type. Thus, in this new notation, (30) would be expressed with the role restriction attached to the quantified variable. The retriever recognizes these new forms and fully reasons about the role restrictions. It is important to remember that each of these notation changes is an extension onto the original simple language. Everything that could be stated previously can still be stated. The new notation, besides often being more concise and convenient, is necessary only if the semantic network retrieval facilities are desired.

Note also that we can now define the inverse of (28), and state that all actions with actor JACK are necessarily of type ACTION-BY-JACK. This can be expressed as

(32) ∀a:ACTIONS [ACTOR JACK] TYPE(a,ACTION-BY-JACK).
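The following Python sketch suggests one way a retriever could evaluate a role-restricted variable such as a:ACTIONS [ACTOR JACK] against atomic TYPE and ROLE facts; the individuals A1 and A2 and the helper names are invented, and subtype closure is omitted for brevity:

TYPE = {("A1", "ACTIONS"), ("A2", "ACTIONS")}
ROLE = {("A1", "ACTOR", "JACK"), ("A2", "ACTOR", "JILL"), ("A1", "OBJECT", "BALL1")}

def satisfies(individual, type_restriction, role_restrictions):
    """True if the individual has the required type and every required role value."""
    if (individual, type_restriction) not in TYPE:
        return False
    return all((individual, r, v) in ROLE for r, v in role_restrictions)

def instances(type_restriction, role_restrictions):
    candidates = {i for i, t in TYPE if t == type_restriction}
    return {i for i in candidates if satisfies(i, type_restriction, role_restrictions)}

# "all actions with actor JACK", i.e. the extension of ACTION-BY-JACK:
print(instances("ACTIONS", [("ACTOR", "JACK")]))    # {'A1'}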
equality:
One of the crucial facilities needed by natural language systems is the ability to reason about whether individuals are equal. This issue is often finessed in semantic networks by assuming that each node represents a different individual, or that every type in the type hierarchy is disjoint. This assumption has been called E-saturation by [Reiter, 1980]. A natural language understanding system using such a representation must decide on the referent of each description as the meaning representation is constructed, since if it creates a new individual as the referent, that individual will then be distinct from all previously known individuals. Since in actual discourse the referent of a description is not always recognized until a few sentences later, this approach lacks generality.

One approach to this problem is to introduce full reasoning about equality into the representation, but this rapidly produces a combinatorially prohibitive search space. Thus other more specialized techniques are desired. We shall consider mechanisms for proving inequality first, and then methods for proving equality.

Hendrix [1979] introduced some mechanisms that enable inequality to be proven. In his system, there are two forms of subtype links, and two forms of instance links. This can be viewed in our system as follows: the SUBTYPE and TYPE predicates discussed above make no commitment regarding equality. However, a new relation, DSUBTYPE(t1,t2), asserts that t1 is a SUBTYPE of t2, and also that the elements of t1 are distinct from all other elements of other DSUBTYPEs of t2. This is captured by the axioms

(A.4) ∀t,t1,t2,i1,i2 (DSUBTYPE(t1,t) ∧ DSUBTYPE(t2,t) ∧ TYPE(i1,t1) ∧ TYPE(i2,t2) ∧ ¬IDENTICAL(t1,t2)) → (i1 ≠ i2)
(A.5) ∀t1,t DSUBTYPE(t1,t) → SUBTYPE(t1,t)

We cannot express (A.4) in the current logic because the predicate IDENTICAL operates on the syntactic form of its arguments rather than their referents. Two terms are IDENTICAL only if they are lexically the same. To do this formally, we have to be able to refer to the syntactic form of terms. This can be done by introducing quotation into the logic along the lines of [Perlis, 1981], but is not important for the point of this paper.

A similar trick is done with elements of a single type. The predicate DTYPE(i,t) asserts that i is an instance of type t, and also is distinct from any other instances of t where the DTYPE holds. Thus we need

(A.6) ∀i1,i2,t (DTYPE(i1,t) ∧ DTYPE(i2,t) ∧ ¬IDENTICAL(i1,i2)) → (i1 ≠ i2)
(A.7) ∀i,t DTYPE(i,t) → TYPE(i,t)

Another extremely useful categorization of objects is the partitioning of a type into a set of subtypes, i.e., each element of the type is a member of exactly one subtype. This can be defined in a similar manner as above.

Turning to methods for proving equality, [Tarjan, 1975] describes an efficient method for computing relations that form an equivalence class. This is adapted to support full equality reasoning on ground terms. Of course it cannot effectively handle conditional assertions of equality, but it covers many of the typical cases.

Another technique for proving equality exploits knowledge about types. Many types are such that their instances are completely defined by their roles. For such a type T, if two instances I1 and I2 of T agree on all their respective roles, then they are equal. If I1 and I2 have a role where their values are not equal, then I1 and I2 are not equal. If we finally add the assumption that every instance of T can be characterized by its set of role values, then we can enumerate the instances of type T using a function (say t) that has an argument for each role value.

For example, consider the type AGE-RELS of age properties, which takes two roles, an OBJECT and a VALUE. Thus, the property P1 that captures the assertion "John is 10" would be described as follows:

(33) TYPE(P1,AGE-RELS) ∧ ROLE(P1,OBJECT,JOHN1) ∧ ROLE(P1,VALUE,10)

The type AGE-RELS satisfies the above properties, so any individual of type AGE-RELS with OBJECT role JOHN1 and VALUE role 10 is equal to P1. The retriever encodes such knowledge in a preprocessing stage that assigns each individual of type AGE-RELS to a canonical name. The canonical name for P1 would simply be "age-rels(JOHN1,10)".

Once a representation has equality, it can capture some of the distinctions made by perspectives in KRL. The same object viewed from two different perspectives is captured by two nodes, each with its own type, roles, and relations, that are asserted to be equal.

Note that one cannot expect more sophisticated reasoning about equality than the above from the retriever itself. Identifying two objects as equal is typically not a logical inference. Rather, it is a plausible inference by some specialized program such as the reference component of a natural language system which has to identify noun phrases. While the facts represented here would assist such a component in identifying possible referents for a noun phrase given its description, it is unlikely that they would logically imply what the referent is.
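A minimal Python sketch of the canonical-name preprocessing for role-defined types such as AGE-RELS could look like the following; the role table and helper names are invented for illustration:

ROLE_DEFINED_TYPES = {"AGE-RELS": ("OBJECT", "VALUE")}   # fixed role order per type

def canonical_name(type_name, role_fillers):
    """Map an instance description to the canonical term  type(role1,...,rolen)."""
    roles = ROLE_DEFINED_TYPES[type_name]
    args = ",".join(str(role_fillers[r]) for r in roles)
    return f"{type_name.lower()}({args})"

p1 = canonical_name("AGE-RELS", {"OBJECT": "JOHN1", "VALUE": 10})
p2 = canonical_name("AGE-RELS", {"OBJECT": "JOHN1", "VALUE": 10})
print(p1, p1 == p2)    # age-rels(JOHN1,10) True: the two descriptions denote one individual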
associations and partitions:
Semantic networks are useful because they structure information so that it is easy to retrieve relevant facts, or facts about certain objects. Objects are represented only once in the network, and thus there is one place where one can find all relations involving that object (by following back over incoming ROLE arcs). While we need to be able to capture such an ability in our system, we should note that this is often not a very useful ability, for much of one's knowledge about an object will not be attached to that object but will be acquired from the inheritance hierarchy. In a spreading activation type of framework, a considerable amount of irrelevant network will be searched before some fact high up in the type hierarchy is found. In addition, it is very seldom that one wants to be able to access all facts involving an object; it is much more likely that a subset of relations is relevant.

If desired, such associative links between objects can be simulated in our system. One could find all properties of an object o1 (including those by inheritance) by retrieving all bindings of x in the query ∃x,r ROLE(x,r,o1). The ease of access provided by the links in a semantic network is effectively simulated simply by using a hashing scheme on the structure of all ROLE predicates. While the ability to hash on structures to find facts is crucial to an efficient implementation, the details are not central to our point here.

Another important form of indexing is found in Hendrix, where his partition mechanism is used to provide a focus of attention for inference processes [Grosz, 1977]. This is just one of the uses of partitions. Another, which we did not need, provided a facility for scoping facts within logical operators, similar to the use of parentheses in FOPC. Such a focus mechanism appears in our system as an extra argument on the main predicates (e.g., HOLDS, OCCURS, etc.).

Since contexts are introduced as a new class of objects in the language, we can quantify over them and otherwise talk about them. In particular, we can organize contexts into a lattice-like structure (corresponding to Hendrix's vistas for partitions) by introducing a transitive relation SUBCONTEXT. As with the SUBTYPE relation, this axiom would defy an efficient implementation if the contexts were not organized in a finite lattice structure. Of course, we need axioms similar to (A.9) for the OCCURS and IS-REAL predicates.
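As a sketch of the hashing remark above, indexing ROLE facts by their value argument turns "all facts involving o1" into a single lookup; the facts below are invented examples and the code is only an illustration of the idea, not the authors' implementation:

from collections import defaultdict

role_facts = [("E1", "OBJECT", "BALL1"), ("P1", "OBJECT", "JOHN1"),
              ("P1", "VALUE", 10), ("E2", "ACTOR", "JOHN1")]

by_value = defaultdict(list)
for owner, role, value in role_facts:
    by_value[value].append((owner, role))

# Bindings of x and r in the query (exists x,r) ROLE(x, r, JOHN1):
print(by_value["JOHN1"])    # [('P1', 'OBJECT'), ('E2', 'ACTOR')]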
discussion:
We have argued that the appropriate way to design knowledge representations is to identify those inferences that one wishes to facilitate. Once these are identified, one can then design a specialized limited inference mechanism that can operate on a data base of first order facts. In this fashion, one obtains a highly expressive representation language (namely FOPC), as well as a well-defined and extendable retriever.We have demonstrated this approach by outlining a portion of the representation used in ARGOT, the Rochester Dialogue System [Allen, 1982] . We are currently extending the context mechanism to handle time, belief contexts (based on a syntactic theory of belief [Haas, 1982] ), simple hypothetical reasoning, and a representation of plans. Because the matcher is defined by a set of axioms, it is relatively simple to add new axioms that handle new features.For example, we are currently incorporating a model of temporal knowledge based on time intervals [Allen, 1981a] . This is done by allowing any object, event, or relation to be qualified by a time interval as follows: for any untimed concept x, and any time interval t, there is a timed concept consisting of x viewed during t which is expressed by the term (t-concept x t).This concept is of type (TIMED Tx), where Tx is the type of x. Thus we require a type hierarchy of timed concepts that mirrors the hierarchy of untimed concepts.Once this is done, we need to introduce new built-in axioms that extend the retriever. For instance, we define a predicate, DURING(a,b), that is true only if interval a is wholly contained in interval b. Now, if we want the retriever to automatically infer that if relation R holds during an interval t, then it holds in all subintervals of t, we need the following built-in axioms. First, DURING is transitive:(A.10) V a,b,c DURING(a,b) A DURING(b,c) --, DURING(a,c)Second, if P holds in interval t, it holds in all subintervals of t.(A.11) v p,t,t',c HOLDS(t-concept(p,t),c) A DURING(t' ,t) ---, HOLDS(t-concept(p,t'),c).Thus we have extended our representation to handle simple timed concepts with only a minimal amount of analysis.Unfortunately, we have not had the space to describe how to take the specification of the retriever (namely axioms (A.1) -(A.11)) and build an actual inference program out of it. A technique for building such a limited inference mechanism by moving to a meta-logic is described in [Frisch and Allen, 1982] .One of the more interesting consequences of this approach is that it has led to identifying various difference modes of retrieval that are necessary to support a natural language comprehension task, We have considered so far only one mode of retrieval, which we call provability mode. In this mode, the query must be shown to logically follow from the built-in axioms and the facts in the knowledge base. While this is the primary mode of interaction, others are also important.In consistency mode, the query is checked to see if it is logically consistent with the facts in the knowledge base with respect to the limited inference mechanism. While consistency in general is undecidable, with respect to the limited inference mechanism it is computationally feasible. Note that, since the retriever is defined by a set of axioms rather than a program, consistency mode is easy to define.Another important mode is compatibility mode, which is very useful for determining the referents of description. 
A query in compatibility mode succeeds if there is a set of equality and inequality assertions that can be assumed so that the query would succeed in provability mode. For instance, suppose someone refers to an event in which John hit someone with a hat. We would like to retrieve possible events that could be equal to this. Retrievals in compatibility mode are inherently expensive and so must be controlled using a context mechanism such as in [Grosz, 1977]. We are currently attempting to formalize this mode using Reiter's nonmonotonic logic for default reasoning. We have implemented a version of this system in HORNE [Allen and Frisch, 1981], a LISP-embedded logic programming language. In conjunction with this representation is a language which provides many abbreviations and facilities for system users. For instance, users can specify what context and times they are working with respect to, and then omit this information from their interactions with the system. Also, using the abbreviation conventions, the user can describe a relation and events without explicitly asserting the TYPE and ROLE assertions. Currently the system provides the inheritance hierarchy, simple equality reasoning, contexts, and temporal reasoning with the DURING hierarchy.
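As a rough illustration of the DURING hierarchy just mentioned, the sketch below shows the effect of the built-in axioms (A.10) and (A.11): a fact asserted to hold over an interval is retrievable for every subinterval. This is only an illustration under the simplifying assumption that intervals are plain (start, end) pairs; the class name and example proposition are invented, not part of the system described above.

```python
# Sketch of the DURING hierarchy: HOLDS(t-concept(p, t), c) is asserted over an
# interval t, and retrieval succeeds for any interval t' with DURING(t', t).

def during(a, b):
    """DURING(a, b): interval a = (start, end) is wholly contained in interval b."""
    return b[0] <= a[0] and a[1] <= b[1]

class TemporalKB:
    def __init__(self):
        self.holds = []   # list of (proposition, interval, context)

    def assert_holds(self, prop, interval, context):
        self.holds.append((prop, interval, context))

    def query_holds(self, prop, interval, context):
        # (A.11): p holds during t' if p was asserted over some t with DURING(t', t).
        # (A.10), transitivity of DURING, is immediate for concrete (start, end) pairs.
        return any(p == prop and c == context and during(interval, t)
                   for p, t, c in self.holds)

if __name__ == "__main__":
    kb = TemporalKB()
    kb.assert_holds("OWNS(JOHN1, OLD-BLACK1)", (1975, 1982), "C0")
    print(kb.query_holds("OWNS(JOHN1, OLD-BLACK1)", (1978, 1979), "C0"))  # True
    print(kb.query_holds("OWNS(JOHN1, OLD-BLACK1)", (1970, 1979), "C0"))  # False
```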
introduction:
We are engaged in a long-term project to construct a system that can partake in extended English dialogues on some reasonably well specified range of topics. A major part of this effort so far has been the specification of a knowledge representation. Because of the wide range of issues that we are trying to capture, which includes the representation of plans, actions, time, and individuals' beliefs and intentions, it is crucial to work within a framework general enough to accommodate each issue. Thus, we began developing our representation within the first-order predicate calculus. So far, this has presented no problems, and we aim to continue within this framework until some problem forces us to do otherwise. Given this framework, we need to be able to build reasonably efficient systems for use in the project. In particular, the knowledge representation must be able to support the natural language understanding task. This requires that certain forms of inference must be made. Within a general theorem-proving framework, however, those inferences desired would be lost within a wide range of undesired inferences. Thus we have spent considerable effort in constructing a specialized inference component that can support the language understanding task. Before such a component could be built, we needed to identify what inferences were desired. Not surprisingly, much of the behavior we desire can be found within existing semantic network systems used for natural language understanding. Thus the question "What inferences do we need?" can be answered by answering the question "What's in a semantic network?" Ever since Woods's [1975] "What's in a Link" paper, there has been a growing concern for formalization in the study of knowledge representation. Several arguments have been made that frame representation languages and semantic-network languages are syntactic variants of the first-order predicate calculus (FOPC). The typical argument (e.g., [Hayes, 1979; Nilsson, 1980; Charniak, 1981a]) proceeds by showing how any given frame or network representation can be mapped to a logically isomorphic (i.e., logically equivalent when the mapping between the two notations is accounted for) FOPC representation. We emphasize the term "logically isomorphic" because these arguments have primarily dealt with the content (semantics) of the representations rather than their forms (syntax). Though these arguments are valid and scientifically important, they do not answer our question. Semantic networks not only represent information but facilitate the retrieval of relevant facts. For instance, all the facts about the object JOHN are stored with a pointer directly to one node representing JOHN (e.g., see the papers in [Findler, 1979]). Another example concerns the inheritance of properties. Given a fact such as "All canaries are yellow," most network systems would automatically conclude that "Tweety is yellow," given that Tweety is a canary. This is typically implemented within the network matcher or retriever. We have demonstrated elsewhere [Frisch and Allen, 1982] the utility of viewing a knowledge retriever as a specialized inference engine (theorem prover). A specialized inference engine is tailored to treat certain predicate, function, and constant symbols differently than others. This is done by building into the inference engine certain true sentences involving these symbols and the control needed to handle these sentences.
The inference engine must also be able to recognize when it is able to use its specialized machinery. That is, its specialized knowledge must be coupled to the form of the situations that it can deal with. For illustration, consider an instance of the ubiquitous type hierarchies of semantic networks:

AUTOS
  | subtype
MUSTANGS
  | type
OLD-BLACK1

By mapping the types AUTOS and MUSTANGS to be predicates which are true only of automobiles and mustangs respectively, the following two FOPC sentences are logically isomorphic to the network:

(1.1) ∀x MUSTANGS(x) → AUTOS(x)
(1.2) MUSTANGS(OLD-BLACK1)

However, these two sentences have not captured the form of the network, and furthermore, not doing so is problematic to the design of a retriever. The subtype and type links have been built into the network language because the network retriever has been built to handle them specially. That is, the retriever does not view a subtype link as an arbitrary implication such as (1.1) and it does not view a type link as an arbitrary atomic sentence such as (1.2). In our representation language we capture the form as well as the content of the network. By introducing two predicates, TYPE and SUBTYPE, we capture the meaning of the type and subtype links. TYPE(i,t) is true iff the individual i is a member of the type (set of objects) t, and SUBTYPE(t1,t2) is true iff the type t1 is a subtype (subset) of the type t2. Thus, in our language, the following two sentences would be used to represent what was intended by the network:

(2.1) SUBTYPE(MUSTANGS,AUTOS)
(2.2) TYPE(OLD-BLACK1,MUSTANGS)

It is now easy to build a retriever that recognizes subtype and type assertions by matching predicate names. Contrast this to the case where the representation language used (1.1) and (1.2) and the retriever would have to recognize these as sentences to be handled in a special manner. But what must the retriever know about the SUBTYPE and TYPE predicates in order that it can reason (make inferences) with them? There are two assertions, (A.1) and (A.2), such that {(1.1),(1.2)} is logically isomorphic to {(2.1),(2.2),(A.1),(A.2)}. (Note: throughout this paper, axioms that define the retriever's capabilities will be referred to as built-in axioms and specially labeled A.1, A.2, etc.)

(A.1) ∀t1,t2,t3 SUBTYPE(t1,t2) ∧ SUBTYPE(t2,t3) → SUBTYPE(t1,t3)
(SUBTYPE is transitive.)
(A.2) ∀o,t1,t2 TYPE(o,t1) ∧ SUBTYPE(t1,t2) → TYPE(o,t2)
(Every member of a given type is a member of its supertypes.)

The retriever will also need to know how to control inferences with these axioms, but this issue is considered only briefly in this paper. The design of a semantic-network language often continues by introducing new kinds of nodes and links into the language. This process may terminate with a fixed set of node and link types that are the knowledge-structuring primitives out of which all representations are built. Others have referred to these knowledge-structuring primitives as epistemological primitives [Brachman, 1979], structural relations [Shapiro, 1979], and system relations [Shapiro, 1971]. If a fixed set of knowledge-structuring primitives is used in the language, then a retriever can be built that knows how to deal with all of them. The design of our representation language very much mimics this approach. Our knowledge-structuring primitives include a fixed set of predicate names and terms denoting three kinds of elements in the domain. We give meaning to these primitives by writing domain-independent axioms involving them.
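The following Python sketch shows one way such a retriever could be organized (this is only an illustration of the idea, not the authors' implementation; the class and method names are assumptions): SUBTYPE and TYPE base facts are stored as a hierarchy, and axioms (A.1) and (A.2) are built in as a simple graph walk rather than handed to a general theorem prover.

```python
# A minimal sketch of a retriever that builds in axioms (A.1) and (A.2):
# SUBTYPE facts are stored as edges of a type hierarchy, and TYPE queries
# are answered by walking that hierarchy instead of by general deduction.

class Retriever:
    def __init__(self):
        self.supertypes = {}   # type -> set of direct supertypes (SUBTYPE facts)
        self.members = {}      # individual -> set of directly asserted types (TYPE facts)

    def assert_subtype(self, t1, t2):
        """Record the base fact SUBTYPE(t1, t2)."""
        self.supertypes.setdefault(t1, set()).add(t2)

    def assert_type(self, obj, t):
        """Record the base fact TYPE(obj, t)."""
        self.members.setdefault(obj, set()).add(t)

    def _all_supertypes(self, t):
        # Built-in axiom (A.1): SUBTYPE is transitive.
        seen, stack = set(), [t]
        while stack:
            current = stack.pop()
            for sup in self.supertypes.get(current, ()):
                if sup not in seen:
                    seen.add(sup)
                    stack.append(sup)
        return seen

    def query_subtype(self, t1, t2):
        """Does SUBTYPE(t1, t2) follow from the base facts and (A.1)?"""
        return t2 in self._all_supertypes(t1)

    def query_type(self, obj, t):
        """Does TYPE(obj, t) follow from the base facts, (A.1) and (A.2)?"""
        for asserted in self.members.get(obj, ()):
            # Built-in axiom (A.2): members of a type belong to its supertypes.
            if asserted == t or t in self._all_supertypes(asserted):
                return True
        return False


if __name__ == "__main__":
    kb = Retriever()
    kb.assert_subtype("MUSTANGS", "AUTOS")     # (2.1)
    kb.assert_type("OLD-BLACK1", "MUSTANGS")   # (2.2)
    print(kb.query_type("OLD-BLACK1", "AUTOS"))    # True, by (A.2) and (A.1)
    print(kb.query_subtype("AUTOS", "MUSTANGS"))   # False: the hierarchy is directed
```

Because the type hierarchy is finite and the walk only follows SUBTYPE edges, every query terminates quickly, which is the practical payoff of restricting the retriever to these built-in axioms.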
Thus far in this paper we have introduced two predicates (TYPE and SUBTYPE), two kinds of elements (individuals and types), and two axioms ((A.1) and (A.2)). We shall name types in uppercase and individuals in uppercase letters followed by at least one digit. Considering the above analysis, a retrieval now is viewed as an attempt to prove that some queried fact logically follows from the base facts (e.g., (2.1), (2.2)) and the built-in axioms (such as A.1 and A.2). For the purposes of this paper, we can consider all base facts to be atomic formulae (i.e., they contain no logical operators except negation). While compound formulae such as disjunctions can be represented, they are of little use to the semantic network retrieval facility, and so will not be considered in this paper. We have implemented a retriever along these lines and it is currently being used in the Rochester Dialogue System [Allen, 1982].
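The same retrieval-as-limited-proof view can be sketched more generically (again only an illustration; the rule encoding and function names are assumptions, not the ARGOT or HORNE implementation): the base facts are ground atoms, the built-in axioms are applied as a bounded closure, and a query is retrieved exactly when it falls inside that closure, so the retriever never attempts arbitrary theorem proving.

```python
# Sketch: provability-mode retrieval as membership in the closure of the
# atomic base facts under the built-in axioms (A.1) and (A.2) only.

def close(facts):
    """Return the closure of a set of ground TYPE/SUBTYPE atoms under (A.1) and (A.2)."""
    facts = set(facts)
    while True:
        subtypes = [f for f in facts if f[0] == "SUBTYPE"]
        types = [f for f in facts if f[0] == "TYPE"]
        new = set()
        # (A.1): SUBTYPE(a,b) and SUBTYPE(b,c) give SUBTYPE(a,c)
        for (_, a, b) in subtypes:
            for (_, c, d) in subtypes:
                if b == c:
                    new.add(("SUBTYPE", a, d))
        # (A.2): TYPE(o,t1) and SUBTYPE(t1,t2) give TYPE(o,t2)
        for (_, o, t1) in types:
            for (_, a, b) in subtypes:
                if t1 == a:
                    new.add(("TYPE", o, b))
        if new <= facts:          # nothing new: the closure is complete
            return facts
        facts |= new

def retrieve(query, base_facts):
    """A query succeeds only if it follows from the base facts and built-in axioms."""
    return query in close(base_facts)

if __name__ == "__main__":
    base = {("SUBTYPE", "MUSTANGS", "AUTOS"),        # (2.1)
            ("TYPE", "OLD-BLACK1", "MUSTANGS")}      # (2.2)
    print(retrieve(("TYPE", "OLD-BLACK1", "AUTOS"), base))     # True
    print(retrieve(("TYPE", "OLD-BLACK1", "CANARIES"), base))  # False: not derivable
```

Under this reading, a consistency-mode check could be approximated by testing that the negation of a query does not appear in the same finite closure, though the control details belong to the meta-logic treatment cited above.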
Appendix:
| null | null | null | null | {
"paperhash": [
"kowalski|logic_for_problem_solving",
"allen|an_interval-based_representation_of_temporal_knowledge",
"allen|what’s_necessary_to_hide?:_modeling_action_verbs",
"bobrow|an_overview_of_krl,_a_knowledge_representation_language",
"tarjan|efficiency_of_a_good_but_not_linear_set_union_algorithm",
"shapiro|a_net_structure_for_semantic_information_storage,_deduction_and_retrieval",
"haas|mental_states_and_mental_actions_in_planning",
"perlis|language,_computation,_and_reality",
"findler|associative_networks-_representation_and_use_of_knowledge_by_computers",
"grosz|the_representation_and_use_of_focus_in_dialogue_understanding."
],
"title": [
"Logic for problem solving",
"An Interval-Based Representation of Temporal Knowledge",
"What’s Necessary to Hide?: Modeling Action Verbs",
"An overview of KRL, a Knowledge Representation Language",
"Efficiency of a Good But Not Linear Set Union Algorithm",
"A Net Structure for Semantic Information Storage, Deduction and Retrieval",
"Mental states and mental actions in planning",
"Language, computation, and reality",
"Associative Networks- Representation and Use of Knowledge by Computers",
"The representation and use of focus in dialogue understanding."
],
"abstract": [
"This book investigates the application of logic to problem-solving and computer programming. It assumes no previous knowledge of these fields, and may be Karl duncker in addition to make difficult fill one of productive. The unifying epistemological virtues of program variables tuples in different terminologies he wants. Functional fixedness which appropriate solutions are most common barrier. Social psychologists over a goal is represented can take. There is often largely unintuitive and, all be overcome standardized procedures like copies? Functional fixedness it can be made possible for certain fields looks. In the solution paths or pencil. After toiling over the ultimate mentions that people cling rigidly to strain on. Luckily the book for knowledge of atomic sentences or fundamental skills. Functional fixedness is a problem solving techniques such.",
"This paper describes a method for maintaining the relationships between temporal intervals in a hierarchical manner using constraint propagation techniques. The representation includes a notion of the present moment (i.e., \"now\"), and allows one to represent intervals that may extend indefinitely into the past/future. \n \nThis research was supported in part by the National Science Foundation under Grant Number IST-80-12418, and in part by the Office of Naval Research under Grant Number N00014-80-O0197.",
"This paper considers what types of knowledge one must possess in order to reason about actions. Rather than concentrating on how actions are performed, as is done in the problem-solving literature, it examines the set of conditions under which an action can be said to have occurred. In other words, if one is told that action A occurred, what can be inferred about the state of the world? In particular, if the representation can define such conditions, it must have good models of time, belief, and intention. This paper discusses these issues and suggests a formalism in which general actions and events can be defined. Throughout, the action of hiding a book from someone is used as a motivating example.",
"This paper describes KRL, a Knowledge Representation Language designed for use in understander systems. It outlines both the general concepts which underlie our research and the details of KRL-0, an experimental implementation of some of these concepts. KRL is an attempt to integrate procedural knowledge with a broad base of declarative forms. These forms provide a variety of ways to express the logical structure of the knowledge, in order to give flexibility in associating procedures (for memory and reasoning) with specific pieces of knowledge, and to control the relative accessibility of different facts and descriptions. The formalism for declarative knowledge is based on structured conceptual objects with associated descriptions. These objects form a network of memory units with several different sorts of linkages, each having well-specified implications for the retrieval process. Procedures can be associated directly with the internal structure of a conceptual object. This procedural attachment allows the steps for a particular operation to be determined by characteristics of the specific entities involved. The control structure of KRL is based on the belief that the next generation of intelligent programs will integrate data-directed and goal-directed processing by using multi-processing. It provides for a priority-ordered multi-process agenda with explicit (user-provided) strategies for scheduling and resource allocation. It provides procedure directories which operate along with process frameworks to allow procedural parameterization of the fundamental system processes for building, comparing, and retrieving memory structures. Future development of KRL will include integrating procedure definition with the descriptive formalism.",
"TWO types of instructmns for mampulating a family of disjoint sets which partitmn a umverse of n elements are considered FIND(x) computes the name of the (unique) set containing element x UNION(A, B, C) combines sets A and B into a new set named C. A known algorithm for implementing sequences of these mstructmns is examined It is shown that, if t(m, n) as the maximum time reqmred by a sequence of m > n FINDs and n -- 1 intermixed UNIONs, then kima(m, n) _~ t(m, n) < k:ma(m, n) for some positive constants ki and k2, where a(m, n) is related to a functional inverse of Ackermann's functmn and as very slow-growing.",
"This paper describes a data structure, MENS (MEmory Net Structure), that is useful for storing semantic information stemming from a natural language, and a system, MENTAL (MEmory Net That Answers and Learns) that interacts with a user (human or program), stores information into and retrieves information from MENS and interprets some information in MENS as rules telling it how to deduce new information from what is already stored. MENTAL can be used as a guestion-answering system with formatted input /output, as a vehicle for experimenting with various theories of semantic structures or as the memory management portion of a natural language question-answering system.",
"An intelligent agent needs to plan for mental goals as well as physical goals. A mental state can be an end in itself (trying to find out John's phone number) or a prerequisite to a physical action (finding out John's phone number in order to call him). A formal theory of mental states and actions is a first step towards a program that can plan to achieve mental goals. This thesis describes such a theory and its use in planning. The theory is an attempt to formalize naive psychology--what common sense says about the mind. \nThe common-sense ideas we try to formalize are roughly as follows. An intelligent agent has beliefs, which can be true or false. New beliefs can be formed by perception, by introspection, or by deduction from old beliefs. An agent must retrieve a belief from his memory before he can use it. When an agent attempts to achieve a goal, he tries to infer from his beliefs that he can achieve the goal by executing a plan P. If he finds such a P, he will execute it. If you know an agent's goal and his beliefs, you can predict his actions fairly well by trying to find a plan P such that the agent's beliefs entail that he can achieve his goal by executing P. \nThe theory says that an agent's beliefs are sentences of an internal language. This language resembles first-order logic except that it includes quotation. This allows agents to have beliefs about their own beliefs. A series of axioms in this quoted logic formalizes the ideas above. These axioms are used to prove correctness of a number of plans that involve mental actions and that predict the behavior of other agents. A proof-checking program tests that the axioms really entail the desired theorems. Thus the thesis formalizes knowledge about the human mind that has never been presented rigorously before. It applies this knowledge to two planning problems: planning mental actions, and planning to deal with other agents.",
"The main theme of this thesis is the interplay of assertion and meaning, or quotation and un-quotation, in reasoning entities. This is motivated largely by analysis of the notion of possibility in several contexts, most specifically, in relation to resource-limited computational models of belief and inference, as well as in philosophy of science. \nA first-order treatment of quotation and un-quotation is given that allows broad and paradox-free expression of syntax and semantics. It is argued that this makes unnecessary the usual hierarchical constructions for notions such as default reasoning, theory subsumption, concepts, beliefs, and self-reference, and indeed that even greater expressive power is achieved than in those treatments, with reduced complexity of notation. \nThis is then applied to a model of belief and inference in which focus of attention is a key element. Effort is made to isolate certain automatic inferences apparently part of the very meaning of propositional beliefs, and then base more sophisticated thinking on these. \nFinally, some thoughts are presented on how resource-limited computation may bear on the notion of possibility in foundations of physics and modal logic.",
"Upon opening this book and leafing through the pages, one gets the impression of an important compendium. The fourteen articles provide good coverage of semantic networks and related systems for representing knowledge. Their average length of 33 pages is long enough to give each author reasonable scope, yet short enough to permit a variety of viewpoints to be expressed in a single volume. The editor should be commended for his efforts in putting together a wellorganized book instead of just another collection of unrelated papers.",
"Abstract : This report develops a representation of focus of attention thatcircumscribes discourse contexts within a general representation ofknowledge. Focus of attention is essential to any comprehension processbecause what and how a person understands is strongly influenced bywhere his attention is directed at a given moment. To formalize thenotion of focus, the need for and the use of focus mechanisms areconsidered from the standpoint of building a computer system that canparticipate in a natural language dialogue with a ser, Two ranges offocus, global and immediate, are investigated, and representations forincorporating them in a computer system are developed.The global focus in which an utterance is interpreted is determinedby the total discourse and situational setting of the utterance. Itinfluences what is talked about, how different concepts are introduced,and how concepts are referenced. To encode global focuscomputationally, a representation is developed that highlights thoseitems that are relevant at a given place in a dialogue. The underlyingknowledge representation is segmented into subunits, called focusspaces, that contain those items that are in the focus of attention of adialogue participant during a particular part of the dialogue.Mechanisms are required for updating the focus representation,because, as a dialogue progresses, the objects and actions that arerelevant to the conversation, and therefore in the participants' focusof attention, change. Procedures are described for deciding when andhow to shift focus in task-oriented dialogues, i.e., in dialogues inwhich the participants are cooperating in a shared task. Theseprocedures are guided by a representation of the task being performed.The ability to represent focus of attention in a languageunderstanding system results in a new approach to an important problemin discourse comprehension -- the identification of the referents ofdefinite noun phrases."
],
"authors": [
{
"name": [
"R. Kowalski"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"James F. Allen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"James F. Allen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Bobrow",
"T. Winograd"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Tarjan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Shapiro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Haas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Perlis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"N. Findler"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Grosz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"5285557",
"17510255",
"18978174",
"7965074",
"11105749",
"33714788",
"142560200",
"60553326",
"15616277",
"61114426"
],
"intents": [
[],
[
"methodology"
],
[
"methodology"
],
[],
[],
[],
[],
[],
[
"background"
],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | Problem: The paper addresses the need for formalization in the study of knowledge representation, particularly in the context of knowledge retrievers and representation languages.
Solution: The paper proposes a representation language in the notation of First-Order Predicate Calculus (FOPC) to facilitate the design of a semantic-network-like retriever for extended English dialogues on various topics. | 512 | 0.085938 | null | null | null | null | null | null | null | null |
13c538c6600774bd32eeceb982e314a9cc2d0c5d | 237295822 | null | Summary of discussion (Sessions 1 {\&} 2) | The bulk of the discussion following the first two sessions centred on the methodologies used for establishing standard and special interest glossaries and vocabularies and the optimum methods for disseminating and assessing them. * Note (Rapporteuse): Cheap to reproduce, once the initial investment in microfiche production equipment has been made. | {
"name": [
"Mayorcas-Cohen, Pamela"
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 0 | 0 | null | In view of the high cost involved in publishing multilingual editions of glossaries such as the very useful welding glossaries, it was suggested that translators would favour the production of bilingual and multi-lingual editions on microfiches. Microfiches could be produced quite cheaply* and provide compact storage; furthermore, updated versions could be sent to subscribers at regular intervals. The International Institute of Welding was examining a number of alternatives to hard copy publication and microfiche was one of the options; however, no final decision had been taken. Despite their advantages, there was still user resistance to microfiches, while the issuing and filing of updates required additional effort on the part of both publisher and subscriber.It was suggested that since welding was essentially a craft activity its terminology would have tended to be rather parochial, with significant regional variations, and that there would be considerable difficulties in establishing a consolidated international vocabulary. In fact, it was the growing importance of welding as an industrial process, for such varied products as atomic reactors and household gadgets, which had prompted the IIW to create a multilingual terminology for use by the international community. While regional variations did persist and usage could vary, as between American and British English, or French, Swiss and Belgian French, national standards bodies were endeavouring to 'iron out' disparities at national level and encourage the use of a more homogeneous vocabulary.Great interest was shown in the work being carried out in Israel for creating new terms. Delegates were referred to the Encyclopaedia Judaeica for a detailed description of the history and development of modern Hebrew, and for the principles of Hebrew lexicology. The Academy of the Hebrew Language operated a system similar to that of BSI with regard to standardisation of terminology. The Academy did not initiate work on new dictionaries and word lists but responded to requests from the press, from industry and from academics. These professionals were the first non-native speakers of Hebrew and were inevitably influenced by their mother tongue when faced with the need to create new Hebrew words in their sphere of activity. This could result in the creation of a number of synonymous calques within a single industry or university department.On receiving a request or enquiry, the Academy would consult the appropriate subject expert. Where no preferred Hebrew term existed, a committee of authors and linguists would solicit suggestions. These would be circulated amongst the subject experts and submitted to a grammar committee. A full plenary session of the Academy, comprising authors, linguists and translators (23 in all) would then be called to approve or reject the term. If no consensus were reached, the matter would be referred back to the original committee which would repeat the consultation process. Once approved, a term would be published in the Academy's official gazette. Only government departments were legally obliged to use the term. Terms became part of the language if they found general acceptance amongst the general public and the Academy eagerly sought public reaction and comment. 
It was explained that Arabic neologisms were generally unacceptable under the rules of modern Hebrew lexicology, even though both languages shared the three-consonant root structure and similar word-building patterns. There was little control in the Arabic countries over the entry of foreign words into the language. Furthermore, Arabic itself was not a uniform language but varied from Baghdad to Damascus to Cairo. Doubt was expressed as to the usefulness of glossaries and terminology standards as currently conceived by their compilers and publishers. Dictionaries based on standard vocabularies tended to leave out words which were in common use and were too narrow in their scope. The United Nations was cited as an example of a multilingual environment where many delegates were non-native speakers of the official language who tended to use a varied and non-standard vocabulary. It was suggested that more standards organisations should adopt the practice of including lists of non-recommended terms (termes déconseillés), which would be marked as such.While it was true that multilingual vocabulary lists were not generally held in high esteem by translators, they could be helpful in identifying the source of calques devised by non-native speakers of a language. The general feeling was that publishers and compilers of standard vocabularies and special word lists should pay more attention to the expectations translators had of such tools.The perennial chestnut was raised of the need for some formal body to control the use of English, in the form of an Academy for the Advancement of English. Such an Academy would protect and refine the language and lay down proscribed and prescribed forms. Translators with their long history and experience in the use of language should play a leading role in such a body. A straw poll of conference delegates showed that there was little support for such a body. The panel considered that the notion though an ideal one was not feasible, for a variety of reasons. English, like Arabic, existed in several different forms all over the world and an Academy would have to permit the legitimate 'big' variants from North America and Australia as well as the large number of pidgin languages based on English.Technical vocabulary tended to suffer wherever officialdom tried to interfere. French was cited as an example of a language where engineers and technicians used one common, well-known word while government officials used long and little-known circumlocutions. Thus parallel vocabularies tended to emerge. The problem was particularly acute on international committees where technical experts and official government representatives would use two words for the same thing.Lastly, it was suggested that speakers and users of English were simply not susceptible to formal controls.In spite of the conference title "Translating and the Computer", or rather because of it, it was considered surprising that the first three papers in Session 1 had scarcely mentioned computers, either in relation to the production of glossaries and dictionaries* or as regards translators' access to them. The implication seemed to be that conventional printed copy continued to offer the best access to terminology even where this was held on a term bank and that access via the computer terminal was cumbersome and expensive. 
It was pointed out that computer-stored terminology provided the raw material from which reasonably-priced tailor-made printed requirements could be produced.Dr Yannai would shortly be provided with a desk-top terminal. At present he searches a dictionary but as 5,000 terms are added every six months it will be difficult to search printout until computer techniques have improved. One suggestion put forward to explain why computers were not yet universally used for glossary circulation and production was that this was still a problem of scale. Given the cost of the initial installation, conventional production and publication methods continue to be cost-effective for relatively small and highly-specialised glossaries.The time would shortly be arriving when computers would offer the only viable means for storing, controlling and updating the explosion of technical vocabulary in all fields and in all languages. It was true that retrieval interrogation techniques for term banks needed to be improved.** Concern was also expressed at the apparent duplication of effort at national and international levels both as regards the creation of technical vocabularies and the development of term banks. If users could not find a product to satisfy their immediate needs they would tend to provide their own solution.** On the whole the panel felt that as data banks and networks became more commonplace and offered cheaper tariffs, the trend would be away from the wealth of printed dictionaries to computer stored terminology. * Note: (Rapporteuse): the IIW, IEC and ice glossaries ** This theme was taken up again in the discussion following Session 5.Turning from words to pictures, a delegate expressed concern at the lack of international standards for graphical symbols and schematics. What, if anything, were the international and technical bodies doing about this? There were few comprehensive reference works and look-up facilities were primitive. ("How do you know what to look up when you don't know what you are looking for!" was the blunt but pragmatic cry).Delegates were informed that the IIW was preparing a table of internationally-accepted symbols for the operation of welding equipment. This was currently before ISO and the rate of progress was very slow. The IEC had the largest collection in the world of symbols for use in circuit diagrams and for use on equipment such as a symbol for 'press', and a series of international symbols for use in railway stations, airports and traffic signs is being developed. However, it was felt that symbols would always be ambiguous and that the international community should decide to use one natural language and adopt selected terms from that language. It was agreed that access to published lists of graphics and symbols was difficult: in the United Kingdom, BSI was one of the best sources of information.Later in the discussion, delegates returned to the very long lead times required for terms to emerge from the various committees and subcommittees and, with reference to the Israeli example, whether translators and writers were represented. Experts could be called on to distil current usage but the results of their deliberations needed to be exposed to public comment if terms were to find general acceptance. Delegates were advised that their help and cooperation was actively sought and that they should contact the appropriate standards body or technical umbrella organisation if they felt they had expertise in a specific subject area. 
| null | null | null | null | Main paper:
:
In view of the high cost involved in publishing multilingual editions of glossaries such as the very useful welding glossaries, it was suggested that translators would favour the production of bilingual and multi-lingual editions on microfiches. Microfiches could be produced quite cheaply* and provide compact storage; furthermore, updated versions could be sent to subscribers at regular intervals. The International Institute of Welding was examining a number of alternatives to hard copy publication and microfiche was one of the options; however, no final decision had been taken. Despite their advantages, there was still user resistance to microfiches, while the issuing and filing of updates required additional effort on the part of both publisher and subscriber.It was suggested that since welding was essentially a craft activity its terminology would have tended to be rather parochial, with significant regional variations, and that there would be considerable difficulties in establishing a consolidated international vocabulary. In fact, it was the growing importance of welding as an industrial process, for such varied products as atomic reactors and household gadgets, which had prompted the IIW to create a multilingual terminology for use by the international community. While regional variations did persist and usage could vary, as between American and British English, or French, Swiss and Belgian French, national standards bodies were endeavouring to 'iron out' disparities at national level and encourage the use of a more homogeneous vocabulary.Great interest was shown in the work being carried out in Israel for creating new terms. Delegates were referred to the Encyclopaedia Judaeica for a detailed description of the history and development of modern Hebrew, and for the principles of Hebrew lexicology. The Academy of the Hebrew Language operated a system similar to that of BSI with regard to standardisation of terminology. The Academy did not initiate work on new dictionaries and word lists but responded to requests from the press, from industry and from academics. These professionals were the first non-native speakers of Hebrew and were inevitably influenced by their mother tongue when faced with the need to create new Hebrew words in their sphere of activity. This could result in the creation of a number of synonymous calques within a single industry or university department.On receiving a request or enquiry, the Academy would consult the appropriate subject expert. Where no preferred Hebrew term existed, a committee of authors and linguists would solicit suggestions. These would be circulated amongst the subject experts and submitted to a grammar committee. A full plenary session of the Academy, comprising authors, linguists and translators (23 in all) would then be called to approve or reject the term. If no consensus were reached, the matter would be referred back to the original committee which would repeat the consultation process. Once approved, a term would be published in the Academy's official gazette. Only government departments were legally obliged to use the term. Terms became part of the language if they found general acceptance amongst the general public and the Academy eagerly sought public reaction and comment. It was explained that Arabic neologisms were generally unacceptable under the rules of modern Hebrew lexicology, even though both languages shared the three-consonant root structure and similar word-building patterns. 
There was little control in the Arabic countries over the entry of foreign words into the language. Furthermore, Arabic itself was not a uniform language but varied from Baghdad to Damascus to Cairo. Doubt was expressed as to the usefulness of glossaries and terminology standards as currently conceived by their compilers and publishers. Dictionaries based on standard vocabularies tended to leave out words which were in common use and were too narrow in their scope. The United Nations was cited as an example of a multilingual environment where many delegates were non-native speakers of the official language who tended to use a varied and non-standard vocabulary. It was suggested that more standards organisations should adopt the practice of including lists of non-recommended terms (termes déconseillés), which would be marked as such.While it was true that multilingual vocabulary lists were not generally held in high esteem by translators, they could be helpful in identifying the source of calques devised by non-native speakers of a language. The general feeling was that publishers and compilers of standard vocabularies and special word lists should pay more attention to the expectations translators had of such tools.The perennial chestnut was raised of the need for some formal body to control the use of English, in the form of an Academy for the Advancement of English. Such an Academy would protect and refine the language and lay down proscribed and prescribed forms. Translators with their long history and experience in the use of language should play a leading role in such a body. A straw poll of conference delegates showed that there was little support for such a body. The panel considered that the notion though an ideal one was not feasible, for a variety of reasons. English, like Arabic, existed in several different forms all over the world and an Academy would have to permit the legitimate 'big' variants from North America and Australia as well as the large number of pidgin languages based on English.Technical vocabulary tended to suffer wherever officialdom tried to interfere. French was cited as an example of a language where engineers and technicians used one common, well-known word while government officials used long and little-known circumlocutions. Thus parallel vocabularies tended to emerge. The problem was particularly acute on international committees where technical experts and official government representatives would use two words for the same thing.Lastly, it was suggested that speakers and users of English were simply not susceptible to formal controls.In spite of the conference title "Translating and the Computer", or rather because of it, it was considered surprising that the first three papers in Session 1 had scarcely mentioned computers, either in relation to the production of glossaries and dictionaries* or as regards translators' access to them. The implication seemed to be that conventional printed copy continued to offer the best access to terminology even where this was held on a term bank and that access via the computer terminal was cumbersome and expensive. It was pointed out that computer-stored terminology provided the raw material from which reasonably-priced tailor-made printed requirements could be produced.Dr Yannai would shortly be provided with a desk-top terminal. At present he searches a dictionary but as 5,000 terms are added every six months it will be difficult to search printout until computer techniques have improved. 
One suggestion put forward to explain why computers were not yet universally used for glossary circulation and production was that this was still a problem of scale. Given the cost of the initial installation, conventional production and publication methods continue to be cost-effective for relatively small and highly-specialised glossaries.The time would shortly be arriving when computers would offer the only viable means for storing, controlling and updating the explosion of technical vocabulary in all fields and in all languages. It was true that retrieval interrogation techniques for term banks needed to be improved.** Concern was also expressed at the apparent duplication of effort at national and international levels both as regards the creation of technical vocabularies and the development of term banks. If users could not find a product to satisfy their immediate needs they would tend to provide their own solution.** On the whole the panel felt that as data banks and networks became more commonplace and offered cheaper tariffs, the trend would be away from the wealth of printed dictionaries to computer stored terminology. * Note: (Rapporteuse): the IIW, IEC and ice glossaries ** This theme was taken up again in the discussion following Session 5.Turning from words to pictures, a delegate expressed concern at the lack of international standards for graphical symbols and schematics. What, if anything, were the international and technical bodies doing about this? There were few comprehensive reference works and look-up facilities were primitive. ("How do you know what to look up when you don't know what you are looking for!" was the blunt but pragmatic cry).Delegates were informed that the IIW was preparing a table of internationally-accepted symbols for the operation of welding equipment. This was currently before ISO and the rate of progress was very slow. The IEC had the largest collection in the world of symbols for use in circuit diagrams and for use on equipment such as a symbol for 'press', and a series of international symbols for use in railway stations, airports and traffic signs is being developed. However, it was felt that symbols would always be ambiguous and that the international community should decide to use one natural language and adopt selected terms from that language. It was agreed that access to published lists of graphics and symbols was difficult: in the United Kingdom, BSI was one of the best sources of information.Later in the discussion, delegates returned to the very long lead times required for terms to emerge from the various committees and subcommittees and, with reference to the Israeli example, whether translators and writers were represented. Experts could be called on to distil current usage but the results of their deliberations needed to be exposed to public comment if terms were to find general acceptance. Delegates were advised that their help and cooperation was actively sought and that they should contact the appropriate standards body or technical umbrella organisation if they felt they had expertise in a specific subject area.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0 | null | null | null | null | null | null | null | null |
3e3bfa62ade298272a0e5259dd86aee097291003 | 237295788 | null | Session 5: Creating Term Banks. Summary of discussion | Delegates were principally concerned at the lack of compatibility between input, terminals, and data storage media (disks and diskettes). Communication between wordprocessors from the same stable could be highly problematic. It was felt that, as small users, translators and translation agencies did not have enough economic clout to insist that their special needs be catered for. They needed to combine forces in order to get their views across to manufacturers. | {
"name": [
"Mayorcas-Cohen, Pamela"
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 0 | 0 | null | null | null | null | null | The problems of incompatibility and transmission difficulties were acknowledged by the big manufacturers who were, nevertheless, somewhat handicapped in providing suitable solutions. Some comfort could be gained from the knowledge that small users were not alone in being bewildered by the range and variety of equipment available, and hampered by problems of equipment interconnection and communication. These arose from three main sources:-firstly, the national PTTs were responsible for providing the telecommunications network. The present network was based on voice (analog signal) transmission lines which were unsuitable for transmission of data (digital signals). However, dedicated lines for the transmission of data services were gradually being installed which would improve inter-machine communications as well as access to data networks; -secondly, equipment incompatibility and non-standard initialising systems derived partly from the current state-of-the-art of character conversion codes. Machines could only talk to one another if they recognised the same codes, but there were currently three "standards" in use.* Whilst most big computers and systems used one of the three and could convert between codes, many wordprocessors especially those at the lower end of the price scale, did not offer this facility. It was unlikely that the big manufacturers would change their use of a particular standard, so that improved interconversion systems would need to be developed; -a third factor was that, as yet, no international standard had been agreed for character sets. ISO had been grappling with the problem for some years but no satisfactory solution was in sight.The only practical advice that could be given to the small user was to start simply, and seek advice from others with experience of both equipment and suppliers.There was a lack of consensus as to whether translators did or did not require terminology in areas outside their special field. Some took the view that since one knew one's own terms and had no need of others, access to sophisticated and expensive term banks would serve little purpose. Others were of the view that term banks would be particularly helpful for the odd terms outside one's normal field which tended, inevitably, to creep into the most specialised texts. Access to a single and central terminology store would be especially useful for translators unfamiliar with the available printed sources.It was generally agreed that a universal, all-embracing terminology data bank was both unrealistic and impractical. The real need was for small, specialised term banks for individual users or groups of users, with large back-up files for general terms and terms outside the user's particular speciality. This would represent a translation in computer terms of the linguist's bookshelf which generally contained a large number of small, highly-specialised dictionaries and glossaries, and a smaller number of large, general dictionaries.It was also suggested that term bank producers and users should be able to isolate appropriate sections of a bank, either to use as a self-contained collection (e.g. subject-related glossaries) or to build on in order to satisfy a particular requirement (e.g. 
a long-term or team project).[This was standard practice amongst users of bibliographic data bases who purchase sections of a data base to run on their own information systems and amplify if necessary (Rapporteuse)].Delegates openly admitted that translators seek the quickest and easiest source of information, and even neglect the services of terminologists who are employed to help them. Thus, it seemed likely that term banks and their contents would not be accepted as a viable and valid tool until each translator could have his own terminal, on his own desk.Turning to the organisation of data within a term bank, delegates learnt that no satisfactory solution had been found for producing a common subject-coding scheme. All the large term banks had evolved their own subject codes and schemes. These were designed for a specific category of user and hence were not compatible with or transferable to other terminology collections. The World Bank had followed this anarchic tendency although it had tried to remain within the UN family and devise a scheme which would be compatible with the other UN agencies.There was also concern at the duplication of effort, nationally and internationally as regards both the development of computerised terminology stores and work on special subject glossaries. The technical problems of access to an existing bank, combined with the context-specific nature of many banks, tended to encourage firms and organisations with the necessary means to start up their own bank. Two developments were awaited: improved compatibility and standardisation of date-entry formats for terminological records and software compatibility to facilitate the exchange of data between banks.The recurring theme was "We have the technology -but we don't know how to use it". Essentially users, in this case translators and terminologists, should decide what they expected of systems and establish clear guidelines for the collection, preparation and retrieval of terminological data. The user community needed to identify itself and define its requirements before it could be more aggressive in conveying its needs to equipment manufacturers and to those bodies who were in a position to mount term bank projects. | Main paper:
:
The problems of incompatibility and transmission difficulties were acknowledged by the big manufacturers who were, nevertheless, somewhat handicapped in providing suitable solutions. Some comfort could be gained from the knowledge that small users were not alone in being bewildered by the range and variety of equipment available, and hampered by problems of equipment interconnection and communication. These arose from three main sources:-firstly, the national PTTs were responsible for providing the telecommunications network. The present network was based on voice (analog signal) transmission lines which were unsuitable for transmission of data (digital signals). However, dedicated lines for the transmission of data services were gradually being installed which would improve inter-machine communications as well as access to data networks; -secondly, equipment incompatibility and non-standard initialising systems derived partly from the current state-of-the-art of character conversion codes. Machines could only talk to one another if they recognised the same codes, but there were currently three "standards" in use.* Whilst most big computers and systems used one of the three and could convert between codes, many wordprocessors especially those at the lower end of the price scale, did not offer this facility. It was unlikely that the big manufacturers would change their use of a particular standard, so that improved interconversion systems would need to be developed; -a third factor was that, as yet, no international standard had been agreed for character sets. ISO had been grappling with the problem for some years but no satisfactory solution was in sight.The only practical advice that could be given to the small user was to start simply, and seek advice from others with experience of both equipment and suppliers.There was a lack of consensus as to whether translators did or did not require terminology in areas outside their special field. Some took the view that since one knew one's own terms and had no need of others, access to sophisticated and expensive term banks would serve little purpose. Others were of the view that term banks would be particularly helpful for the odd terms outside one's normal field which tended, inevitably, to creep into the most specialised texts. Access to a single and central terminology store would be especially useful for translators unfamiliar with the available printed sources.It was generally agreed that a universal, all-embracing terminology data bank was both unrealistic and impractical. The real need was for small, specialised term banks for individual users or groups of users, with large back-up files for general terms and terms outside the user's particular speciality. This would represent a translation in computer terms of the linguist's bookshelf which generally contained a large number of small, highly-specialised dictionaries and glossaries, and a smaller number of large, general dictionaries.It was also suggested that term bank producers and users should be able to isolate appropriate sections of a bank, either to use as a self-contained collection (e.g. subject-related glossaries) or to build on in order to satisfy a particular requirement (e.g. 
a long-term or team project).[This was standard practice amongst users of bibliographic data bases who purchase sections of a data base to run on their own information systems and amplify if necessary (Rapporteuse)].Delegates openly admitted that translators seek the quickest and easiest source of information, and even neglect the services of terminologists who are employed to help them. Thus, it seemed likely that term banks and their contents would not be accepted as a viable and valid tool until each translator could have his own terminal, on his own desk.Turning to the organisation of data within a term bank, delegates learnt that no satisfactory solution had been found for producing a common subject-coding scheme. All the large term banks had evolved their own subject codes and schemes. These were designed for a specific category of user and hence were not compatible with or transferable to other terminology collections. The World Bank had followed this anarchic tendency although it had tried to remain within the UN family and devise a scheme which would be compatible with the other UN agencies.There was also concern at the duplication of effort, nationally and internationally as regards both the development of computerised terminology stores and work on special subject glossaries. The technical problems of access to an existing bank, combined with the context-specific nature of many banks, tended to encourage firms and organisations with the necessary means to start up their own bank. Two developments were awaited: improved compatibility and standardisation of date-entry formats for terminological records and software compatibility to facilitate the exchange of data between banks.The recurring theme was "We have the technology -but we don't know how to use it". Essentially users, in this case translators and terminologists, should decide what they expected of systems and establish clear guidelines for the collection, preparation and retrieval of terminological data. The user community needed to identify itself and define its requirements before it could be more aggressive in conveying its needs to equipment manufacturers and to those bodies who were in a position to mount term bank projects.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0 | null | null | null | null | null | null | null | null |
cbb1980b3c6c0db3f01446b77002531bca5945e4 | 237295814 | null | Training terminologists for term banks | A general classification of terminologists according to training and background, followed by a brief survey of terminology training programmes up to the present time, and the various different backgrounds of those who participate in them. A review of the central content of general terminology training programme with especial reference to the projected content of training programmes offered to term bank terminologists. Conclusions. | {
"name": [
"Picht, Heribert"
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 8 | 1 | null | In the course of the last few years, the professional profile of the terminologist has undergone a process of clarification. Yet even today there is no uniform conception of who can be described as a terminologist and what specific functions he is expected to perform. This state of affairs is evidenced, for example, by a comparison of descriptions of the professional tasks of those employed in the field of terminology by various organisations and institutions.One may, however, distinguish 2 major categories(l), which differ in respect of their professional training:-The technical or other expert engaged in terminology work, chiefly within his own field.The LSP (Language for Special Purposes) trained translator, who concerns himself mainly with translation-oriented terminology work, which is necessarily multilingual.Where any training of terminology workers whatsoever has taken place, this generally has had a pronounced practical bias and has been tailored to the needs of a particular institution.Usually, no basic theory has been offered. This applies both to many terminologists in the terminology departments of language services and to standardisations.Where, however, terminology is embedded in a language study programme -principally LSP studies -a more systematic approach is observed. Here, though, particular aspects which serve the purposes of the particular subject fields, are given prominence.In the early and middle '70's, the question constantly arose of what was to be taught and to what level, in order to provide training with a sufficient range to equip the trainee with a valid foundation for his multi-faceted work.Today, terminology training is generally thought of as a supplementary discipline, which is generally coupled with LSP studies, in some cases also with general language studies. There are, however, an increasing number of plans afoot to expand training programmes to include instruction in the specialist disciplines.These efforts have led to clearer outlining of the contours for projected training. In several countries, there are already plans to provide terminology training as a subsidiary course,combined with a major degree course.The answer to this question must be sought in various factors which cannot all be examined here. But one or two deserve a mention.1. Terminology is a relatively young discipline, which borders on various other disciplines. None of the established disciplines could on its own fulfil the functions of terminology. A process of integration -also within the field of theory -was therefore essential. Each of these groups manifest gaps in their terminological knowledge which require filling in, if we are to obtain a reasonably homogeneous professional profile of the terminologist.As far as I am aware, training programmes(s) include the following subfields:1. In many cases the framework of terminology is mapped out through an introduction to LSP and its nature. 
- Introduction to the various different interpretations of "concept"; this section frequently includes a brief insight into the philosophical principles of the concept as such.
- Analysis of the concept, i.e., identification of characteristics, types and their classification, as far as possible, and the correlation of characteristics within the concept.
- Definition theory; this section is chiefly concerned with the formal composition of a definition and the requirements it must satisfy with regard to certain essential aspects, for example, for whom the definition is intended.
- The relationship prevalent between the various concepts of any special field, with particular reference to the types of conceptual relationships and their correlation within a system of concepts. In this connection the attention should be drawn to the aims of the conceptual system, and also to the various different types which are often closely related to the subject field.
In the case of multilingual, contrasting work, the demand for the determination of equivalence is of the greatest importance; this aspect should therefore be accorded considerable scope: while in the field of technology, equivalence may be assessed with comparative ease, in the "soft sciences" such assessment is problematical and has so far been insufficiently studied.
4. Term Formation. The content of the term has been dealt with under the conceptual aspect. The concern of the present section is to indicate the relationship between expression and content. For this purpose, the "familiar" models are employed. The following points are also discussed:
- Term versus Word.
- What universally applicable requirements may be stipulated for terms (Question of motivation)?
- What particular problems should be considered? Chiefly concerning language or special field-specific questions to do with the formation of terms.
- Synonymy, polysemy, homonymy within the field of terminology.
5. Lexicography. In this part of the training programme an analysis of the existing forms of dictionaries is generally undertaken, concluding not infrequently in a critical review entailing the differentiation between common core or general dictionaries and LSP dictionaries. In an increasing number of cases, an introduction is offered to the modern aids of the lexicographer, i.e. computer-assisted lexicography is mentioned, with a summary of its potential and limitations. At many institutions, this training is limited to an introduction since it is only in a very few cases possible to demonstrate the operation of a terminological data bank. However, it should be borne in mind that this section in particular will witness many changes in the course of the next few years, necessarily so, in view of the rapidly increasing importance of this aspect.
6. Documentation. It is not, in fact, possible to offer a short course in documentational training, but in most cases the basic elements of documentation are presented, with particular attention being paid to points which have a bearing on the theory of terminology.
7. Apart from these points, instruction is offered in the historical development of the theory of terminology, standardisation, language planning, and the organisation of work in terminology on a national or international scale.
8.
It has proved necessary for paedagogical reasons to supplement the theoretical sections with practical exercises and to conclude the training programme with a terminology project which seeks to touch on all aspects of the training.
These remarks, broadly speaking, cover the first part of my theme, and it should be noted before we proceed, that this part is fundamental; without it, it would scarcely be possible for any truly valuable work in the field of terminology to be performed. However, let us now consider the training of terminologists within the framework of a term bank.
The training programme outlined in this section presupposes a basic knowledge of terminology which corresponds roughly to that which I have just mentioned. It should, furthermore, be borne in mind that there are two aspects in this training: both a knowledge of term banks in general and the specific knowledge of, and familiarity with, one term bank in particular. In connection with this aspect, all the following points should be considered and included in this special part of the training programme.
1. A survey of all the various types of term banks and their function and purpose. At this stage, in one or two typical cases, the origins and development should be briefly presented, since a complete review proves thought-provoking and stimulating for the learner.
2. A thorough introduction to the system of the term banks for which work is to be undertaken upon completion of training. Here, it is essential that the basic conception, the purpose and the technical possibilities of the system are made quite clear.
3. An essential feature is the complete command of the record, and not that of the "home" bank alone, but also the records of those other banks with which an exchange of terminological data is carried out (see also point 6).
4. Of equally great importance is a command of the relevant system of classification. This means, besides the ability to operate a system mechanically, the ability to carry out unaided, complex tasks of classification in accordance with the system. To achieve this, familiarity with the conception of the classification system is imperative. A command of the classification systems of exchange partners is, likewise, indicated.
5. It appears almost banal to stipulate that the technical operation of the bank should be mastered faultlessly, not only -and this is essential -as a user of the bank, but also as a term bank specialist; that is to say, the potential and limitations of the bank's functions should be known in order to permit the use of the bank already during the stage of collation of terminologies. In other words, the bank should also serve as an instrument for processing.
6. The exchange of terminological inventories sounds -especially in theory -extremely simple. However, in practice, matters are somewhat different. In this section such questions as: unaltered transfer? Adaptation by machine processing (where at all possible)? Supplementation, breakdown or summarising of individual items of information -in short: adaptation to the record of the "home" bank -should be thoroughly discussed (a minimal illustrative sketch of such an adaptation is given below).
7. An operation which will be of great importance in the future is the maintenance, renewal and continuous checking of inventories. Many banks have, for a wide variety of reasons, allowed their enthusiasm to run away with them, and have ended up storing vast quantities of terminological information despite the fact that its quality was known to be far from satisfactory in many cases.
How such "polluted" inventories are to be purified -even when this is feasible -or how a new processing operation can best succeed, should in any event be the object of study during training. There are, after all, a number of unfortunate examples to act as an object lesson.8. Lastly, research-related aspects of a term bank should be considered. These might include, for instance, various areas such as the further development of the bank, the study of term formation, the analysis of definition, etc.In conclusion, it may be stated that term bank workers should possess a) training in terminology as an essential basis for terminology work, irrespective of whether traditional or electronic lexicographical methods are used b) term bank-specific knowledge, which 1. is of general character and applicable to general matters 2. refers to the "home" bank and may only be acquired by working in association with it 3. includes all banks operating in a linked file system.This division has the practical and paedagogical consequence that points a) and, to a certain extent, point b)1, can be learnt without access to a term bank; points b)2 and b)3, however, necessitate intensive practical training which can hardly be simulated and therefore ought only to take place in conjunction with a term bank.The principal difference in training terminologists for term banks and terminologists for other purposes, lies not in the area of the theory, but rather in the area of the technical means and methods of LSP lexicography, sometimes known as terminography.. This is supplemented by a definition of the aims of terminology theory and a classification of the object to which it applies.3. The heading theory of concepts covers the following principal subsections: | null | null | null | null | Main paper:
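Point 6 above concerns adapting records received from an exchange partner to the record of the "home" bank. The two field layouts and the mapping in the following sketch are invented for illustration only; a real adaptation would be driven by the banks' actual record formats and classification schemes.

```python
# Hypothetical field layouts: neither corresponds to a real term bank.
HOME_FIELDS = ["term", "language", "subject_code", "definition", "source"]

# Mapping from a made-up partner bank's field names to the home bank's.
PARTNER_TO_HOME = {
    "benennung": "term",
    "sprache": "language",
    "sachgebiet": "subject_code",
    "definition": "definition",
    "quelle": "source",
}

def adapt_record(partner_record: dict) -> dict:
    """Rename, supplement or drop fields so that a partner record fits the home format."""
    home_record = {field: "" for field in HOME_FIELDS}
    for partner_field, value in partner_record.items():
        home_field = PARTNER_TO_HOME.get(partner_field)
        if home_field is not None:
            home_record[home_field] = value
        # Fields with no counterpart are simply dropped here; a real bank
        # might instead carry them over into a free-text remarks field.
    return home_record

incoming = {
    "benennung": "Lichtbogenschweissen",
    "sprache": "de",
    "sachgebiet": "SCHW-01",
    "quelle": "partner glossary",
}
print(adapt_record(incoming))
```

Even this toy mapping shows why "unaltered transfer" is rarely possible: the receiving bank must decide, field by field, what to keep, what to recode and what to discard.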
a survey of all the various types of term banks and their function:
and purpose. At this stage, in one or two typical cases, the origins and development should be briefly presented, since a complete review proves thought-provoking and stimulating for the learner.2. A thorough introduction to the system of the term banks for which work is to be undertaken upon completion of training. Here, it is essential that the basic conception, the purpose and the technical possibilities of the system are made quite clear.3. An essential feature is the complete command of the record, and not that of the "home" bank alone, but also the records of those other banks with which an exchange of terminological data is carried out (see also point 6).4. Of equally great importance is a command of the relevant system of classification. This means besides the ability to operate a system mechanically, the ability to carry out unaided, complex tasks of classification in accordance with the system. To achieve this, familiarity with the conception of the classification system is imperative. A command of the classification systems of exchange partners is, likewise, indicated.5. It appears almost banal to stipulate that the technical operation of the bank should be mastered faultlessly, not only -and this is essential -as a user of the bank, but also as a termbank specialist; that is to say, the potential and limitations of the bank's functions should be known in order to permit the use of the bank already during the stage of collation of terminologies. In other words, the bank should also serve as an instrument for processing.6. The exchange of terminological inventories sounds -especially in theory -extremely simple. However, in practice, matters are somewhat different. In this section such questions as: unaltered transfer? Adaptation by machine processing (where at all possible)? Supplementation, breakdown or summarising of individual items of information -in short: adaptation to the record of the "home" bank -should be thoroughly discussed.7, An operation which will be of great importance in the future is the maintenance, renewal and continuous checking of inventories. Many banks have, for a wide variety of reasons, allowed their enthusiasm to run away with them, and have ended up storing vast quantities of terminological information despite the fact that its quality was known to be far from satisfactory in many cases. How such "polluted" inventories are to be purified -even when this is feasible -or how a new processing operation can best succeed, should in any event be the object of study during training. There are, after all, a number of unfortunate examples to act as an object lesson.8. Lastly, research-related aspects of a term bank should be considered. These might include, for instance, various areas such as the further development of the bank, the study of term formation, the analysis of definition, etc.In conclusion, it may be stated that term bank workers should possess a) training in terminology as an essential basis for terminology work, irrespective of whether traditional or electronic lexicographical methods are used b) term bank-specific knowledge, which 1. is of general character and applicable to general matters 2. refers to the "home" bank and may only be acquired by working in association with it 3. 
includes all banks operating in a linked file system.This division has the practical and paedagogical consequence that points a) and, to a certain extent, point b)1, can be learnt without access to a term bank; points b)2 and b)3, however, necessitate intensive practical training which can hardly be simulated and therefore ought only to take place in conjunction with a term bank.The principal difference in training terminologists for term banks and terminologists for other purposes, lies not in the area of the theory, but rather in the area of the technical means and methods of LSP lexicography, sometimes known as terminography.. This is supplemented by a definition of the aims of terminology theory and a classification of the object to which it applies.3. The heading theory of concepts covers the following principal subsections:
introduction:
In the course of the last few years, the professional profile of the terminologist has undergone a process of clarification. Yet even today there is no uniform conception of who can be described as a terminologist and what specific functions he is expected to perform. This state of affairs is evidenced, for example, by a comparison of descriptions of the professional tasks of those employed in the field of terminology by various organisations and institutions.One may, however, distinguish 2 major categories(l), which differ in respect of their professional training:-The technical or other expert engaged in terminology work, chiefly within his own field.The LSP (Language for Special Purposes) trained translator, who concerns himself mainly with translation-oriented terminology work, which is necessarily multilingual.Where any training of terminology workers whatsoever has taken place, this generally has had a pronounced practical bias and has been tailored to the needs of a particular institution.Usually, no basic theory has been offered. This applies both to many terminologists in the terminology departments of language services and to standardisations.Where, however, terminology is embedded in a language study programme -principally LSP studies -a more systematic approach is observed. Here, though, particular aspects which serve the purposes of the particular subject fields, are given prominence.In the early and middle '70's, the question constantly arose of what was to be taught and to what level, in order to provide training with a sufficient range to equip the trainee with a valid foundation for his multi-faceted work.Today, terminology training is generally thought of as a supplementary discipline, which is generally coupled with LSP studies, in some cases also with general language studies. There are, however, an increasing number of plans afoot to expand training programmes to include instruction in the specialist disciplines.These efforts have led to clearer outlining of the contours for projected training. In several countries, there are already plans to provide terminology training as a subsidiary course,combined with a major degree course.The answer to this question must be sought in various factors which cannot all be examined here. But one or two deserve a mention.1. Terminology is a relatively young discipline, which borders on various other disciplines. None of the established disciplines could on its own fulfil the functions of terminology. A process of integration -also within the field of theory -was therefore essential. Each of these groups manifest gaps in their terminological knowledge which require filling in, if we are to obtain a reasonably homogeneous professional profile of the terminologist.As far as I am aware, training programmes(s) include the following subfields:1. In many cases the framework of terminology is mapped out through an introduction to LSP and its nature. 
Introduction to the various different interpretations of "concept"; this section frequently includes a brief insight into the philosophical principles of the concept as such.-Analysis of the concept, i.e., identification of characteristics, types and theirclassification, as far as possible, and the correlation ofcharacteristics within the concept.-Definition theory; this section is chiefly concerned with the formal composition of a definition and the requirements it must satisfy with regard to certain essential aspects, for example, for whom the definition is intended.-The relationship prevalent between the various concepts of any special field, with particular reference to the types of conceptual relationships and their correlation within a system of concepts. In this connection the attention should be drawn to the aims of the conceptual system, and also to the various different types which are often closely related to the subject field.In the case of multilingual, contrasting work, the demand for the determination of equivalence is of the greatest importance; this aspect should therefore be accorded considerable scope: while in the field of technology, equivalence may be assessed with comparative ease, in the "soft sciences" such assessment is problematical and has so far been insufficiently studied.4. Term Formation. The content of the term has been dealt with under the conceptual aspect. The concern of the present section is to indicate the relationship between expression and content. For this purpose, the "familiar" models are employed. The following points are also discussed:-Term versus Word.-What universally applicable requirements may be stipulated for terms (Question of motivation)?-What particular problems should be considered? Chiefly concerning language or special field-specific questions to do with the formation of terms.-Synonymy, polysemy, homonymy within the field of terminology.5. Lexicography. In this part of the training programme an analysis of the existing forms of dictionaries is generally undertaken; concluding not infrequently in a critical review entailing the differentiation between common core or general dictionaries and LSP dictionaries. In an increasing number of cases, an introduction is offered to the modern aids of the lexicographer i.e. computerassisted lexicography is mentioned, with a summary of its potential and limitations. At many institutions, this training is limited to an introduction since is it only in a very few cases possible to demonstrate the operation of a terminological data bank. However, it should be borne in mind that this section in particular will witness many changes in the course of the next few yearsnecessarily so, in view of the rapidly increasing importance of this aspect. 6. Documentation. It is not, in fact, possible to offer a short course in documentational training, but in most cases the basic elements of documentation are presented, with particular attention being paid to points which have a bearing on the theory of terminology.7. Apart from these points, instruction is offered in the historical development of the theory of terminology, standardisation, language planning, and the organisation of work in terminology on a national or international scale.8. 
It has proved necessary for paedagogical reasons to supplement the theoretical sections with practical exercises and to conclude the training programme with a terminology project which seeks to touch on all aspects of the training.These remarks, broadly speaking, cover the first part of my theme, and it should be noted before we proceed, that this part is fundamental; without it, it would scarcely be possible for any truly valuable work in the field of terminology to be performed. However, let us now consider the training of terminologists within the framework of a term bank.The training programme outlined in this section presupposes a basic knowledge of terminology which corresponds roughly to that which I have just mentioned. It should, furthermore, be borne in mind that there are two aspects in this training: both a knowledge of term banks in general and the specific knowledge of, and familiarity with, one term bank in particular. In connection with this aspect, all the following points should be considered and included in this special part of the training programme.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0.001972 | null | null | null | null | null | null | null | null |
81165b88e2f7f79a8970d04e00f8c00a9de57789 | 237295775 | null | Welding terminology in 18 languages | ORIGINS OF THE WORK Before describing the compilation of this terminology, it is necessary to sketch in the background against which the work has been carried out. The Multilingual Collection of Terms for Welding and Allied Processes is one of the achievements of the International Institute of Welding which was founded in 1948 with an initial membership of welding institutes or other appropriate non-commercial organisations, such as university departments, from 13 countries. Membership increased fairly rapidly and, by 1957, there were 25 member countries while, during the next 10 years or so, membership increased to nearly the present level of 37 member countries. However, the Collection of Terms is confined to European languages and, in particular, the Chinese and Japanese delegations have not found it possible to include their languages in the Collection. | {
"name": [
"Boyd, P. D."
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 0 | 0 | null | On the foundation of the IIW, the decision was taken to work through permanent Commissions devoted to different aspects of welding technology and on which each country could be represented by a delegate assisted by experts. There were initially 12 Commissions, one of which (Commission VI) was entitled "Terminology" and given the task of preparing a Multilingual Collection of Terms to make possible accurate translations in a highly specialised area of technology and thus facilitate the exchange of information.The intention was and is to restrict the Collection of Terms to those specifically related to welding and to exclude terms drawn from other disciplines such as engineering and metallurgy, even though those inevitably figure very prominently in welding literature.This decision has meant that the scheme of the Collection comprises one general section, covering welding procedures and the characteristics and inspection of welds, and, so far, 8 other sections devoted to the various welding and allied processes, together with the welding of plastic where a special terminology has evolved which is not applicable to the welding of metals.The first section, devoted to gas welding, was published in 1953 and took the form of a slim volume of 78 pages covering some 350 concepts in 10 languages. It is the first section to have been revised and the new edition has just been published. The advance of the technology in the space of 30 years can be gauged from the fact that the new edition covers some 700 concepts and, with 15 languages included, fills a volume of 293 pages in the same format as the previous edition.An important aspect of the preparation of the Multilingual Collection of Terms is the method of compilation. A list of concepts to be included in each section is prepared by Commission VI. At meetings, it established, with the help of discussions and drawings, the exact correspondence between the terms for these concepts in three basic languages, English, French and German. The term in the three languages is then put on a fiche and the fiches circulated to all members of the Commission who then insert the corresponding term in their own language.On this basis, a methodical list of concepts, divided into chapters, is drawn, up in a logical order, each concept being expressed in all the languages included and having its own number which is, of course, the same for all languages. This list constitutes the first part of each published section of the Collection of Terms.The second part is composed of an alphabetical index for each language where each term is followed by the number of the concept in the methodical index. The user of the terminology looks up the term for which he seeks a translation in the alphabetical index for the language in question and then turns back to the methodical index where he finds the term, together with its equivalents in all the languages included in the section.The terms included are all in current use. If, within a single linguistic group, one country uses terms peculiar to it, they are mentioned, preceded by the appropriate symbol.Whenever, in one language, there are several synonymous terms for a single concept, the standardised term, if any, is placed first, followed by the others in the order of the frequency of their use. 
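The two-part arrangement just described, one alphabetical index per language pointing into a single methodical index of numbered concepts, can be mirrored directly in a small data structure. The entries and concept numbers below are invented and do not reproduce the Collection; the sketch only retraces the lookup path a user of the printed volumes follows.

```python
# Toy data: concept numbers and terms are invented for illustration and do
# not reproduce entries from the IIW Collection.
methodical_index = {
    # concept number -> term in each language included in the section
    101: {"en": "gas welding", "fr": "soudage aux gaz", "de": "Gasschweissen"},
    102: {"en": "filler rod", "fr": "baguette d'apport", "de": "Schweissstab"},
}

# One alphabetical index per language: term -> concept number.
alphabetical_index = {
    lang: {term: number
           for number, terms in methodical_index.items()
           for l, term in terms.items() if l == lang}
    for lang in ("en", "fr", "de")
}

def translate(term: str, source_lang: str) -> dict:
    """Look the term up in the alphabetical index for its language, then
    return all equivalents for that concept from the methodical index,
    exactly as a user of the printed Collection does by hand."""
    number = alphabetical_index[source_lang][term]
    return methodical_index[number]

print(translate("filler rod", "en"))   # -> equivalents in en, fr, de
```

Because every concept carries the same number in every language, adding a further language to a section is a matter of filling one more column of the methodical index, not of rebuilding the indexes.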
Terms defined in ISO, either in a recommendation or in a draft recommendation, are marked with a cross.
Up to the present, the Commission has considered that the establishment of agreed definitions for each term and the translation of those into all the languages included would delay the work to an unacceptable degree and vastly increase the size and cost of each section. This has meant that many terms have had to be illustrated by line drawings to provide additional clarification, these drawings being grouped in a separate part of each volume and identified by the number of the corresponding concept in the methodical index. However, now that a relatively complete multilingual terminology of welding exists, it is planned that, as revisions are undertaken, definitions should be included though these may not necessarily be given in every language contained in each section.
The question may well be asked how the delegates on Commission VI are competent to draw up complete lists of the terms which must be included in each section and how they are able, with certainty, to establish concordance between the corresponding terms, at least in the three basic languages.
Naturally, the delegates on Commission VI are not omniscient but they benefit from the structure of the IIW which has specialist Commissions dealing with, amongst other things, the different welding processes as well as the inspection of welds and the welding of plastics. Each section is drawn up in consultation with the appropriate Commission which may either itself prepare a list of terms which should be included in the relevant section or which, alternatively, checks and comments upon the trilingual fiches prepared by Commission VI. In addition, it is the responsibility of the individual members of Commission VI to discuss the concepts under consideration with appropriate experts in their own country so as to ensure that they fully understand the exact significance of each term and can explain it to their colleagues on the Commission to ensure exact concordance between the different languages.
It will be readily understood that in any programme of the kind undertaken by Commission VI the problems are not only technical; they are also financial. In respect of the Multilingual Collection of Terms, the financial problems are of two kinds. The first concerns the work of Commission VI itself; as a plenary Commission, it meets for at least 7 or possibly 10 days a year, while additional meetings are held of Sub-Commissions responsible for the preparatory work, where the burden falls primarily on delegates representative of the English, French and German languages. All this work is carried out entirely at the charge of the respective national delegations and the employers of the delegates: the latter bear the costs of the time spent by the delegates in the work of the Commission and either they or the national delegations underwrite the travelling and subsistence expenses involved in attendance at the meetings of the Commission and of its Sub-Commissions.
The second element of costs concerns the publication of the sections and here the IIW, as an organisation, has had to face very difficult problems. In the early days, the Swiss delegation, which is in principle trilingual, very generously undertook the publication of the Collection on behalf of the IIW and published and marketed the first six sections.
This was partly made possible by the fact that, up to about 1965, UNESCO was willing, under the terms of fairly strict contracts, to subsidise the publication of Multilingual Terminologies. Accordingly, the IIW was in receipt of subventions which could be used to offset the cost of publication and thus ensure that the sections were marketed at a price below the cost of publication.When these subventions ceased, it became impossible for the Swiss delegation to finance the printing and publication of succeeding sections. Since then, the IIW has adopted a system by which the cost of printing is covered by a levy contributed by the countries whose languages are included in each section of the Collection. The amount of the levy is proportionate to the country's annual subscription to the IIW; in return for the levy, each country receives at a reduced price a number of copies of the section in question, this number being, in turn, proportionate to the amount of money contributed. The prices are so fixed that a substantial number of copies remains after the contributing member countries have received their quotas, these surplus copies being put on sale in countries other than the contributing countries.For a variety of reasons, this system of financing publication has not been entirely satisfactory and the cost of publishing recent sections of the Collection of Terms has in fact been heavily subsidised by the central funds of the IIW. Unfortunately, the IIW is not a body with the capital necessary to act as an international publisher and it is certain that other means will have to be found to finance the publication of the revisions of the existing sections of the Multilingual Collection of Terms.In very recent years, some subventions have again been forthcoming from UNESCO, but it would be surprising if these were sufficient to provide a solution to the problem, particularly if definitions are to be included which will inevitably increase the costs.For this reason, the IIW is beginning to examine the possibility of publication in the form of, for example, trilingual video tapes which could be sold to data bases while national delegations using other languages could prepare their own lists which they could exchange between each other and use in conjunction with the various computer networks.With the present high costs of printing and the limited market for works of reference, consequent upon the introduction of documentation networks, it seems impossible that traditional methods of producing and marketing specialised dictionaries such as the Multilingual Collection of Terms for Welding and Allied Processes can be maintained. It is therefore incumbent upon the IIW to find other solutions which will ensure that the work of Commission VI "Terminology" can be continued and made available to the limited public which needs to consult welding literature in foreign languages. | null | null | null | null | Main paper:
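The subscription figures and print-run numbers in the following sketch are invented; it merely works through the proportional levy and copy-quota mechanism described above, in which each contributing country's share of the printing cost and of the reserved copies follows its annual subscription.

```python
# Invented subscription figures; only the proportional mechanism matters here.
annual_subscriptions = {"Country A": 10_000, "Country B": 5_000, "Country C": 2_500}

def share_costs(printing_cost: float, copies_reserved: int, subscriptions: dict) -> dict:
    """Split the printing cost and the reserved copies in proportion to each
    contributing country's annual subscription."""
    total = sum(subscriptions.values())
    shares = {}
    for country, subscription in subscriptions.items():
        fraction = subscription / total
        shares[country] = {
            "levy": round(printing_cost * fraction, 2),
            "copies": round(copies_reserved * fraction),
        }
    return shares

# Say printing costs 7,000 units and 1,400 copies of the run are reserved for
# the contributing countries; the remaining copies are sold elsewhere.
for country, share in share_costs(7_000, 1_400, annual_subscriptions).items():
    print(country, share)
```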
:
On the foundation of the IIW, the decision was taken to work through permanent Commissions devoted to different aspects of welding technology and on which each country could be represented by a delegate assisted by experts. There were initially 12 Commissions, one of which (Commission VI) was entitled "Terminology" and given the task of preparing a Multilingual Collection of Terms to make possible accurate translations in a highly specialised area of technology and thus facilitate the exchange of information.The intention was and is to restrict the Collection of Terms to those specifically related to welding and to exclude terms drawn from other disciplines such as engineering and metallurgy, even though those inevitably figure very prominently in welding literature.This decision has meant that the scheme of the Collection comprises one general section, covering welding procedures and the characteristics and inspection of welds, and, so far, 8 other sections devoted to the various welding and allied processes, together with the welding of plastic where a special terminology has evolved which is not applicable to the welding of metals.The first section, devoted to gas welding, was published in 1953 and took the form of a slim volume of 78 pages covering some 350 concepts in 10 languages. It is the first section to have been revised and the new edition has just been published. The advance of the technology in the space of 30 years can be gauged from the fact that the new edition covers some 700 concepts and, with 15 languages included, fills a volume of 293 pages in the same format as the previous edition.An important aspect of the preparation of the Multilingual Collection of Terms is the method of compilation. A list of concepts to be included in each section is prepared by Commission VI. At meetings, it established, with the help of discussions and drawings, the exact correspondence between the terms for these concepts in three basic languages, English, French and German. The term in the three languages is then put on a fiche and the fiches circulated to all members of the Commission who then insert the corresponding term in their own language.On this basis, a methodical list of concepts, divided into chapters, is drawn, up in a logical order, each concept being expressed in all the languages included and having its own number which is, of course, the same for all languages. This list constitutes the first part of each published section of the Collection of Terms.The second part is composed of an alphabetical index for each language where each term is followed by the number of the concept in the methodical index. The user of the terminology looks up the term for which he seeks a translation in the alphabetical index for the language in question and then turns back to the methodical index where he finds the term, together with its equivalents in all the languages included in the section.The terms included are all in current use. If, within a single linguistic group, one country uses terms peculiar to it, they are mentioned, preceded by the appropriate symbol.Whenever, in one language, there are several synonymous terms for a single concept, the standardised term, if any, is placed first, followed by the others in the order of the frequency of their use. 
Terms defined in ISO, either in a recommendation or in a draft recommendation, are marked with a cross.Up to the present, the Commission has considered that the establishment of agreed definitions for each term and the translation of those into all the languages included would delay the work to an unacceptable degree and vastly increase the size and cost of each section. This has meant that many terms have had to be illustrated by line drawings to provide additional clarification, these drawings being grouped in a separate part of each volume and identified by the number of the corresponding concept in the methodical index. However, now that a relatively complete multilingual terminology of welding exists, it is planned that, as revisions are undertaken, definitions should be included though these may not necessarily be given in every language contained in each section.The question may well be asked how the delegates on Commission VI are competent to draw up complete lists of the terms which must be included in each section and how they are able, with certainty, to establish concordance between the corresponding terms, at least in the three basic languages.Naturally, the delegates on Commission VI are not omniscient but they benefit from the structure of the IIW which has specialist Commissions dealing with, amongst other things, the different welding processes as well as the inspection of welds and the welding of plastics. Each section is drawn up in consultation with the appropriate Commission which may either itself prepare a list of terms which should be included in the relevant section or which, alternatively, checks and comments upon the trilingual fiches prepared by Commission VI. In addition, it is the responsibility of the individual members of Commission VI to discuss the concepts under consideration with appropriate experts in their own country so as to ensure that they fully understand the exact significance of each term and can explain it to their colleagues on the Commission to ensure exact concordance between the different languages.It will be readily understood that in any programme of the kind undertaken by Commission VI the problems are not only technical; they are also financial. In respect of the Multilingual Collection of Terms, the financial problems are of two kinds. The first concerns the work of Commission VI itself; as a plenary Commission, it meets for at least 7 or possibly 10 days a year, while additional meetings are held of Sub-Commissions responsible for the preparatory work, where the burden falls primarily on delegates representative of the English, French and German languages. All this work is carried out entirely at the charge of the respective national delegations and the employers of the delegates: the latter bear the costs of the time spent by the delegates in the work of the Commission and either they or the national delegations underwrite the travelling and subsistence expenses involved in attendance at the meetings of the Commission and of its Sub-Commissions.The second element of costs concerns the publication of the sections and here the IIW, as an organisation, has had to face very difficult problems. In the early days, the Swiss delegation, which is in principle trilingual, very generously undertook the publication of the Collection on behalf of the IIW and published and marketed the first six sections. 
This was partly made possible by the fact that, up to about 1965, UNESCO was willing, under the terms of fairly strict contracts, to subsidise the publication of Multilingual Terminologies. Accordingly, the IIW was in receipt of subventions which could be used to offset the cost of publication and thus ensure that the sections were marketed at a price below the cost of publication.When these subventions ceased, it became impossible for the Swiss delegation to finance the printing and publication of succeeding sections. Since then, the IIW has adopted a system by which the cost of printing is covered by a levy contributed by the countries whose languages are included in each section of the Collection. The amount of the levy is proportionate to the country's annual subscription to the IIW; in return for the levy, each country receives at a reduced price a number of copies of the section in question, this number being, in turn, proportionate to the amount of money contributed. The prices are so fixed that a substantial number of copies remains after the contributing member countries have received their quotas, these surplus copies being put on sale in countries other than the contributing countries.For a variety of reasons, this system of financing publication has not been entirely satisfactory and the cost of publishing recent sections of the Collection of Terms has in fact been heavily subsidised by the central funds of the IIW. Unfortunately, the IIW is not a body with the capital necessary to act as an international publisher and it is certain that other means will have to be found to finance the publication of the revisions of the existing sections of the Multilingual Collection of Terms.In very recent years, some subventions have again been forthcoming from UNESCO, but it would be surprising if these were sufficient to provide a solution to the problem, particularly if definitions are to be included which will inevitably increase the costs.For this reason, the IIW is beginning to examine the possibility of publication in the form of, for example, trilingual video tapes which could be sold to data bases while national delegations using other languages could prepare their own lists which they could exchange between each other and use in conjunction with the various computer networks.With the present high costs of printing and the limited market for works of reference, consequent upon the introduction of documentation networks, it seems impossible that traditional methods of producing and marketing specialised dictionaries such as the Multilingual Collection of Terms for Welding and Allied Processes can be maintained. It is therefore incumbent upon the IIW to find other solutions which will ensure that the work of Commission VI "Terminology" can be continued and made available to the limited public which needs to consult welding literature in foreign languages.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0 | null | null | null | null | null | null | null | null |
7edf715a5eba0ed13d01d17060672cff2064408c | 237295800 | null | Terminologists and their setting: the {CEC} experience | At every congress or meeting where translators are together one hears many of them complaining that they feel lonely, that they are isolated with their many problems. | {
"name": [
"Goetschalckx, Jacques"
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 0 | 0 | null | In larger translation services this is certainly no longer true. In international organisations such as the United Nations or the European Community, but also on the national level in Canada and in Germany with its Bundessprachenamt, and even in industry -Siemens, Philips, Aerospatiale, Bell Canada -the translator is part of a team, belonging to a specialised department with considerable resources. Even freelance translators have started working on a cooperative basis, especially in Italy and in Germany, to speak only from my own knowledge. They decided to share a secretariat, a library and in Stuttgart also a computer.A translation department in a big organisation is composed of documentalists, terminologists, revisers and translators, with the work organised through a planning section. According to the degree of sophistication the staff make use of text processing equipment, facsimile facilities, documentation databases and terminology banks.This tendency leads to a certain degree of specialisation, with translators becoming documentalists or terminologists. Unfortunately only very few translation schools provide training for these activities in the translation field. As far as I know, this kind of training exists only in Vienna, Copenhagen and Antwerp as far as Europe is concerned, and of course in Quebec and Montreal.Consequently, the training of terminologists in the European Community translation and terminology department is carried out on the job.What is required in order to become a terminologist? Firstly a good knowledge of the target language as well as a sound knowledge of the source languages; secondly, a maniacal desire to find and to use the right term or the appropriate term -which is not always the same -and derive real pleasure from a long and complicated search through the various available sources; and thirdly, checking and rechecking what one has come across with what one has already. A terminologist must have the capacity to associate and combine pieces of information, put them together logically and make sure that they really fit. Furthermore he must have a feeling for technical matters.The consequence of this is that he must have a talent for spotting information or information sources. It presupposes also a lot of psychology to obtain cooperation from documentation centres or experts in the field. Most important of all, maybe, is to let your partner feel that you are ready to give and not only to take.As the training facilities for terminologists are very limited, the EC terminology bureaux apply the rough method of throwing their terminologists in the deep end and making them swim. The best starting point is what we call SVP, which consists of solve the problems raised or submitted by translators. This work provides direct contact with the end user of the information, which enhances the terminologists motivation. It also brings the terminologist into contact with various experts and after some time gives him a feeling of how to handle them. It gives him the opportunity of getting fully acquainted with all the available documents, books and other information sources.The next step could be compilation of thematical glossaries. 
In this work is it not only necessary to collect solid information and documentation but the terminologist also has to pursue his investigation further to obtain a certain command of the terminology in a given field: mining, steelmaking, occupational health and safety, data processing, etc.The last step in the process is the setting up and development of the EURODICAUTOM terminology bank where the same capabilities are required but where there is also the need for sound knowledge of data processing, and last but not least, the need for managerial skills.The place of the terminologist in the general organisation of the translation department depends on the translators. The role of the terminologist can be limited to making available the required documentation. It can also lead to a joint search through all the information available. The translator can leave the full responsibility for the terminology search with the terminologist if he wants to and this can of course, be very practical when his translation has to be delivered at very short notice. He can also decide to do the whole job himself without making use of the facilities offered.Another task of the terminologist is to initiate translators in the use of the classical card files or more specifically to use EURODICAUTOM and eventually to help them access other databases.Furthermore, the task of the terminologist is to improve the translators' working conditions by compiling, developing or setting up card files, thematical glossaries and work on the EURODICAUTOM terminology bank.I think it should be pointed out, however, that the translator remains fully responsible for his translation whether he does it with or without the help of a terminologist. This brings us back to the training of terminologists for the eighties. I should say that in addition to a good knowledge of languages -in our multilingual situation the passive knowledge of many languages can be very valuable -the terminologist requires a knowledge of data processing which enables him to judge what can be done, and done well, by a computer and what can be done better or more easily by man. Furthermore he should be well versed in documentation techniques and theoretical linguistics. He will in any case need to have a good grasp of human psychology. | null | null | null | null | Main paper:
:
In larger translation services this is certainly no longer true. In international organisations such as the United Nations or the European Community, but also on the national level in Canada and in Germany with its Bundessprachenamt, and even in industry -Siemens, Philips, Aerospatiale, Bell Canada -the translator is part of a team, belonging to a specialised department with considerable resources. Even freelance translators have started working on a cooperative basis, especially in Italy and in Germany, to speak only from my own knowledge. They decided to share a secretariat, a library and in Stuttgart also a computer.A translation department in a big organisation is composed of documentalists, terminologists, revisers and translators, with the work organised through a planning section. According to the degree of sophistication the staff make use of text processing equipment, facsimile facilities, documentation databases and terminology banks.This tendency leads to a certain degree of specialisation, with translators becoming documentalists or terminologists. Unfortunately only very few translation schools provide training for these activities in the translation field. As far as I know, this kind of training exists only in Vienna, Copenhagen and Antwerp as far as Europe is concerned, and of course in Quebec and Montreal.Consequently, the training of terminologists in the European Community translation and terminology department is carried out on the job.What is required in order to become a terminologist? Firstly a good knowledge of the target language as well as a sound knowledge of the source languages; secondly, a maniacal desire to find and to use the right term or the appropriate term -which is not always the same -and derive real pleasure from a long and complicated search through the various available sources; and thirdly, checking and rechecking what one has come across with what one has already. A terminologist must have the capacity to associate and combine pieces of information, put them together logically and make sure that they really fit. Furthermore he must have a feeling for technical matters.The consequence of this is that he must have a talent for spotting information or information sources. It presupposes also a lot of psychology to obtain cooperation from documentation centres or experts in the field. Most important of all, maybe, is to let your partner feel that you are ready to give and not only to take.As the training facilities for terminologists are very limited, the EC terminology bureaux apply the rough method of throwing their terminologists in the deep end and making them swim. The best starting point is what we call SVP, which consists of solve the problems raised or submitted by translators. This work provides direct contact with the end user of the information, which enhances the terminologists motivation. It also brings the terminologist into contact with various experts and after some time gives him a feeling of how to handle them. It gives him the opportunity of getting fully acquainted with all the available documents, books and other information sources.The next step could be compilation of thematical glossaries. 
In this work it is not only necessary to collect solid information and documentation but the terminologist also has to pursue his investigation further to obtain a certain command of the terminology in a given field: mining, steelmaking, occupational health and safety, data processing, etc.
The last step in the process is the setting up and development of the EURODICAUTOM terminology bank where the same capabilities are required but where there is also the need for sound knowledge of data processing, and last but not least, the need for managerial skills.
The place of the terminologist in the general organisation of the translation department depends on the translators. The role of the terminologist can be limited to making available the required documentation. It can also lead to a joint search through all the information available. The translator can leave the full responsibility for the terminology search with the terminologist if he wants to and this can, of course, be very practical when his translation has to be delivered at very short notice. He can also decide to do the whole job himself without making use of the facilities offered.
Another task of the terminologist is to initiate translators in the use of the classical card files or more specifically to use EURODICAUTOM and eventually to help them access other databases.
Furthermore, the task of the terminologist is to improve the translators' working conditions by compiling, developing or setting up card files, thematical glossaries and work on the EURODICAUTOM terminology bank.
I think it should be pointed out, however, that the translator remains fully responsible for his translation whether he does it with or without the help of a terminologist. This brings us back to the training of terminologists for the eighties. I should say that in addition to a good knowledge of languages -in our multilingual situation the passive knowledge of many languages can be very valuable -the terminologist requires a knowledge of data processing which enables him to judge what can be done, and done well, by a computer and what can be done better or more easily by man. Furthermore he should be well versed in documentation techniques and theoretical linguistics. He will in any case need to have a good grasp of human psychology.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0 | null | null | null | null | null | null | null | null |
1def4859a3633fc97028aa38ea9ee274da78dec1 | 237295793 | null | New terms in a developing language: the {M}alaysian experience | Twenty-five years ago the Malay language could hardly be used to write even a simple science book for schools at secondary level. It was a vernacular language deficient in terminological and lexical specialisation. | {
"name": [
"Bin Abdul Latiff, Abdul Majid"
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 0 | 0 | null | Dewan Bahasa dan Pustaka, Kuala Lumpur, Malaysia Twenty-five years ago the Malay language could hardly be used to write even a simple science book for schools at secondary level. It was a vernacular language deficient in terminological and lexical specialisation.Today the situation has completely changed. During the 25 years that have elapsed since Malaya obtained its independence in August 1957, the language has undergone an unprecedented "restoration" process during which its vocabulary was revolutionised and brought up-to-date by the addition of learned terminologies and new words drawn and adapted from English and other languages.It is a process not entirely new to the language, for since very early in its history, Malay has been constantly enriched by contact with other languages, thus raising it above the other local and regional languages to the status of a lingua franca of Southeast Asia.The turning point came when both the people and the government in the country realised the importance of the language in playing the role of a common medium of communication in an independent multi-racial Malaya.This realisation led to the establishment in July 1956 of the Balai Pustaka, a language and literature department under the then Federal Department of Education, Federation of Malaya. In September 1965, the name Balai Pustaka was changed to its present one, Dewan Bahasa dan Pustaka (DBP), (the Dewan). In 1957, a provision in the Federal Constitution stipulated the status of Malay as the nation's national and official language.The responsibility of enriching and promoting the growth of the language was entrusted to the Dewan. In 1959, the Dewan acquired the status of a corporation under the Ministry of Education.The terms of reference defining the Dewan (the Language and Literary Agency) are:1. to develop and enrich the national language; 2. to promote literary talents, especially in the national language (now Bahasa Malaysia); 3. to print or publish or assist the printing or publication of books, magazines, pamphlets, and other forms of literature in the national language as well as in the other languages:4. to standardise the spelling and pronunciation, and to coin appropriate terminologies in the national language; and 5. to compile and publish a national language dictionary. (Monolingual Kamus Dewan published in 1970; Bilingual Kamus Dwibahasa published in 1979).In 1972 Malaysia reached an agreement with Indonesia on a common spelling system, and the Indonesian-Malaysian Language Council (MBIM) was formed on 23rd May 1972. In 1973 attention was drawn towards the standardisation of scientific terms between the two countries.In 1975 the Dewan published "Pedoman Umum Pembentukan Istilah Bahasa Malaysia" (General Guidelines for the Formation of Malay Terminologies) from which is taken much of the material in this paper.In Malaysia a permanent Committee on Bahasa Malaysia has been established -its members include linguists and experts in various disciplines from the five local universities, Ministries and other Government Departments -to be responsible for the standardisation of the national language.It should be emphasised that all the terms are coined by Terminology Committees consisting of experts in their respective fields of knowledge. Most of them are members of the various faculties at the local universities. 
Most of the Dewan's publications and translations have relied very heavily on the new terms, particularly the specialised ones, coined by the various Terminology Committees.The new terms have been coined according to certain rules which are both linguistically and practically pertinent to the purposes of enriching the vocabulary of the national language.Of course the rules cannot be very rigid because problems do occasionally crop up. Certain minor divergences from the rules are tolerated when necessary.How the new terms are developed I shall try to illustrate with the help of the chart: A Schematic Procedure for the Formation of Terminology (see below).To date the Dewan has in its term bank 250,000 terms from the various disciplines taught at the local universities.To further enhance the development of terminology the Dewan is utilising a computer to expedite the collection, compilation, standardisation, storage and retrieval of terminologies.The minimum target by 1985 is to add another 350,000 terms to make a total of 600,000 terms in view of the fact that beginning in 1983, all first year courses in the local universities will be conducted in Bahasa Malaysia. | null | null | null | null | Main paper:
new terms in a developing language: the malaysian experience abdul majid bin abdul latiff:
Dewan Bahasa dan Pustaka, Kuala Lumpur, Malaysia. Twenty-five years ago the Malay language could hardly be used to write even a simple science book for schools at secondary level. It was a vernacular language deficient in terminological and lexical specialisation. Today the situation has completely changed. During the 25 years that have elapsed since Malaya obtained its independence in August 1957, the language has undergone an unprecedented "restoration" process during which its vocabulary was revolutionised and brought up-to-date by the addition of learned terminologies and new words drawn and adapted from English and other languages. It is a process not entirely new to the language, for since very early in its history, Malay has been constantly enriched by contact with other languages, thus raising it above the other local and regional languages to the status of a lingua franca of Southeast Asia. The turning point came when both the people and the government in the country realised the importance of the language in playing the role of a common medium of communication in an independent multi-racial Malaya. This realisation led to the establishment in July 1956 of the Balai Pustaka, a language and literature department under the then Federal Department of Education, Federation of Malaya. In September 1965, the name Balai Pustaka was changed to its present one, Dewan Bahasa dan Pustaka (DBP), (the Dewan). In 1957, a provision in the Federal Constitution stipulated the status of Malay as the nation's national and official language. The responsibility of enriching and promoting the growth of the language was entrusted to the Dewan. In 1959, the Dewan acquired the status of a corporation under the Ministry of Education. The terms of reference defining the Dewan (the Language and Literary Agency) are: 1. to develop and enrich the national language; 2. to promote literary talents, especially in the national language (now Bahasa Malaysia); 3. to print or publish or assist the printing or publication of books, magazines, pamphlets, and other forms of literature in the national language as well as in the other languages; 4. to standardise the spelling and pronunciation, and to coin appropriate terminologies in the national language; and 5. to compile and publish a national language dictionary. (Monolingual Kamus Dewan published in 1970; Bilingual Kamus Dwibahasa published in 1979). In 1972 Malaysia reached an agreement with Indonesia on a common spelling system, and the Indonesian-Malaysian Language Council (MBIM) was formed on 23rd May 1972. In 1973 attention was drawn towards the standardisation of scientific terms between the two countries. In 1975 the Dewan published "Pedoman Umum Pembentukan Istilah Bahasa Malaysia" (General Guidelines for the Formation of Malay Terminologies) from which is taken much of the material in this paper. In Malaysia a permanent Committee on Bahasa Malaysia has been established -its members include linguists and experts in various disciplines from the five local universities, Ministries and other Government Departments -to be responsible for the standardisation of the national language. It should be emphasised that all the terms are coined by Terminology Committees consisting of experts in their respective fields of knowledge. Most of them are members of the various faculties at the local universities.
Most of the Dewan's publications and translations have relied very heavily on the new terms, particularly the specialised ones, coined by the various Terminology Committees. The new terms have been coined according to certain rules which are both linguistically and practically pertinent to the purposes of enriching the vocabulary of the national language. Of course the rules cannot be very rigid because problems do occasionally crop up. Certain minor divergences from the rules are tolerated when necessary. How the new terms are developed I shall try to illustrate with the help of the chart: A Schematic Procedure for the Formation of Terminology (see below). To date the Dewan has in its term bank 250,000 terms from the various disciplines taught at the local universities. To further enhance the development of terminology the Dewan is utilising a computer to expedite the collection, compilation, standardisation, storage and retrieval of terminologies. The minimum target by 1985 is to add another 350,000 terms to make a total of 600,000 terms in view of the fact that beginning in 1983, all first year courses in the local universities will be conducted in Bahasa Malaysia.
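The paper does not describe the Dewan's computer system in any technical detail, but the cycle it mentions (collection, compilation, standardisation, storage and retrieval) can be pictured with a small illustrative sketch. Everything in it, the field names, the two sample entries and the committee names, is invented for the illustration and is not drawn from the Dewan's files; it is a sketch of the kind of record such a bank needs, not a description of the actual system.

# Purely illustrative: a minimal terminology record and a case-insensitive
# look-up of the kind a national term bank performs. All names and sample
# entries below are assumptions made for this example.
from dataclasses import dataclass

@dataclass
class TermRecord:
    source_term: str      # term submitted for treatment, e.g. from English
    target_term: str      # approved Bahasa Malaysia equivalent
    subject_field: str    # discipline the term belongs to
    committee: str        # Terminology Committee that approved it

term_bank = [
    TermRecord("oxidation", "pengoksidaan", "chemistry", "Chemistry Terminology Committee"),
    TermRecord("velocity", "halaju", "physics", "Physics Terminology Committee"),
]

def look_up(term: str, records: list) -> list:
    # Return every record whose source term matches, regardless of case.
    return [r for r in records if r.source_term.lower() == term.lower()]

for record in look_up("Velocity", term_bank):
    print(record.source_term, "->", record.target_term, "(" + record.subject_field + ")")

Scaled up to the 250,000 entries mentioned above, the same record shape would simply live in a database rather than in a Python list; the point of the sketch is only that each approved term carries its subject field and its approving committee with it.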
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0 | null | null | null | null | null | null | null | null |
4dc3136972fa24d194a29aa0b2223f1b9efa03c4 | 237295787 | null | Session 7: Terminology on the Market. Chairman{'}s remarks | Without a growing market, the available term bank facilities will remain limited, but advances in computer technology and the demand for fast access to terminology together will provide the basis for growth of the market for stored data of this kind. | {
"name": [
"Craddock, John"
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 0 | 0 | null | null | null | null | null | The market has already been in existence for some years, but its growth is subject to the commercial principles of cost efficiency, to producers for investment in the production of hardware and software, and to consumers for expenditure on these products. At a time of deep recession and high interest rates, the principle is particularly applicable.Put simply, there must be an adequate market for the producer, and the consumer or user of the product must be convinced of the worth to himself of what he is buying, measured against his own experience of existing manual systems. To a degree, these factors are mutually dependent, this dependence is reflected conventionally and on certain important assumptions in falling unit costs and therefore prices, as demand, the market and production expand.The functional efficiency of computer hardware has long been an established fact and it is improving all the time, while software is still in a state of development in many respects, but the potential is enormous.I would like to introduce a personal note at this point... as a practising translator, I have spent many hours delving into dictionaries and reference books. In this activity, as distinct from exploiting other reference sources, I have had to use such wit and guile as I possess in order to find the correct equivalent terms in the target language. I say "wit and guile" because, until fairly recently, foreign-language dictionaries in book form have been far from efficient tools for the translator. Although their imperfections have made linguistic problem-solving by research and exploration a challenging and pleasurable intellectual activity for an inquiring mind which finds interest in most things, the manual method would seem to be inefficient in time and cost. Nevertheless, I am tempted to suggest -heretically, I fancy -that this is not yet wholly proven for all situations and all translators.At this late stage of the conference I think it has emerged clearly, from explicit statements or less explicit references to tight budgets, that during an economic recession, cost efficiency is a crucial factor, both on the supply side and for demand, in the development of the market, particularly for terminological data banks.What we have heard in this session on the production of the Glossary of European Accounting Charts certainly points to the continuing value, to the translator, of carefully-compiled sources in printed-book form -although the necessity to update terminology, emphasised by the speaker, leaves one in no doubt about the role of the computer here, too.The information provided on the LEXIS system underscores the speed and economy of computerisation within specialised fields.Speaking as a translator, it would grieve me to think that we translators are standing in the way of progress, if only as a relatively small (but important) group of potential users, in not making full use of the new aids. 
A point to remember, however, is that translators generally are working on a low budget, and will tend not to incur heavy expenditure on new equipment when the supply of work is reduced in the slack market and computerisation could reduce the supply further -at least in the short run.As a first step, it appears that the initiative must come from potential users to make their individual and collective needs known to the producers of hardware and software -and at this Conference several speakers on the supply side have already called for response from potential users.The professional bodies of translators are planning meetings to this end and more will certainly be heard from them in this regard, following the Conference. | Main paper:
:
The market has already been in existence for some years, but its growth is subject to the commercial principles of cost efficiency, to producers for investment in the production of hardware and software, and to consumers for expenditure on these products. At a time of deep recession and high interest rates, the principle is particularly applicable. Put simply, there must be an adequate market for the producer, and the consumer or user of the product must be convinced of the worth to himself of what he is buying, measured against his own experience of existing manual systems. To a degree, these factors are mutually dependent; this dependence is reflected conventionally and on certain important assumptions in falling unit costs and therefore prices, as demand, the market and production expand. The functional efficiency of computer hardware has long been an established fact and it is improving all the time, while software is still in a state of development in many respects, but the potential is enormous. I would like to introduce a personal note at this point... as a practising translator, I have spent many hours delving into dictionaries and reference books. In this activity, as distinct from exploiting other reference sources, I have had to use such wit and guile as I possess in order to find the correct equivalent terms in the target language. I say "wit and guile" because, until fairly recently, foreign-language dictionaries in book form have been far from efficient tools for the translator. Although their imperfections have made linguistic problem-solving by research and exploration a challenging and pleasurable intellectual activity for an inquiring mind which finds interest in most things, the manual method would seem to be inefficient in time and cost. Nevertheless, I am tempted to suggest -heretically, I fancy -that this is not yet wholly proven for all situations and all translators. At this late stage of the conference I think it has emerged clearly, from explicit statements or less explicit references to tight budgets, that during an economic recession, cost efficiency is a crucial factor, both on the supply side and for demand, in the development of the market, particularly for terminological data banks. What we have heard in this session on the production of the Glossary of European Accounting Charts certainly points to the continuing value, to the translator, of carefully-compiled sources in printed-book form -although the necessity to update terminology, emphasised by the speaker, leaves one in no doubt about the role of the computer here, too. The information provided on the LEXIS system underscores the speed and economy of computerisation within specialised fields. Speaking as a translator, it would grieve me to think that we translators are standing in the way of progress, if only as a relatively small (but important) group of potential users, in not making full use of the new aids.
A point to remember, however, is that translators generally are working on a low budget, and will tend not to incur heavy expenditure on new equipment when the supply of work is reduced in the slack market and computerisation could reduce the supply further -at least in the short run. As a first step, it appears that the initiative must come from potential users to make their individual and collective needs known to the producers of hardware and software -and at this Conference several speakers on the supply side have already called for response from potential users. The professional bodies of translators are planning meetings to this end and more will certainly be heard from them in this regard, following the Conference.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0 | null | null | null | null | null | null | null | null |
e50381570dc76757c26796b2e9e089782a135dfa | 237295797 | null | Aspects of term bank operation | Following a period of basic development we now face the responsibility of creating second-generation term banks for tomorrow's world. The utmost attention should be given to the needs of future users when planning these banks. These needs can be summed up in the keywords simplicity, quality and service. A description is given of the ways and means a term bank administrator can use when meeting these needs. In closing, some of the economic aspects are discussed in connection with term bank operations. | {
"name": [
"{\\AA}str{\\\"o}m, Kjell"
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 0 | 1 | null | Term Banks are a recent innovation. The technology required is in principle a product of the 1970s. For the most part it has been a pioneer undertaking, uniting linguistic and computer skills to create an entirely new field of operation.Work of a pioneering nature usually displays certain basic characteristics. A handful of enthusiasts, devoted to their ideas, energetically seek to put these ideas into effect in an atmosphere where only a few people are capable of discerning the benefits the new ideas would ensure. The first results are characterised by improvisations and inadequacies. Funds are insufficient to permit market promotion, or to conduct follow-up studies among customers in order to improve the product.Now, a few years into the decade of the 1980s, it can with confidence be asserted that term banks are assured a place in the modern world. Some of the banks have grown to be surprisingly large and fulfil important functions within such institutions as the Commission of the European Communities.New, expanding user groups have become interested in the services a term banks can provide. Valuable experience in data base handling and information retrieval in general is being rapidly acquired. Therefore, there is a considerable body of material to analyse if we are to begin planning for a second generation of term banks, one for tomorrow's world.The focus of our efforts in future planning should be the user and his needs. All too often we have allowed the computer's possibilities and limitations to determine the nature of the system we use. We devised specific solutions mainly because it was possible to do so in that way and not necessarily because a user was interested in that particular solution.The objectives of an administrator of a term bank can of course vary considerably. There is nothing wrong with creating a small term bank that has limited functions for the use of a small group of users whose needs are well known in advance. But naturally it is tempting to think big when, after all, something so difficult to foresee as the future is the object of one's attention. Therefore, I intend to concentrate upon the idea of a large, general term bank to serve an entire nation. Such a bank would satisfy the needs of users with a variety of tasks, of prior knowledge, of organisational adherence, or of requirements for a specific product.I have until now used the expression "term bank". But because the subject is somewhat futuristic I shall instead use an interesting neologism that I believe originated with Professor Sager of UMIST, namely "linguistic data bank", or LDB. A linguistic data bank is: "a collection, stored in a computer, of special language vocabularies, including nomenclatures, together with the information required for their multilingual dictionary for direct consultation, as a basis for dictionary production, as a control instrument for consistency of usage and term creation and as an ancillary tool in information and documentation". Data terminals are few and far between in Sweden. For most people here computers are still mysterious and complicated phenomena -more of a threat than a help.The number of people who have mastered online information retrieval methods is limited, perhaps a thousand in all. 
Of these 75% are employed as IR specialists and intermediaries.There are many different IR systems in use today, but it is hardly possible to master more than three search languages.The end user seldom uses online IR methods.Instead traditional, cost-free methods are preferred if one makes the search oneself. The idea that information should cost something to obtain is an idea difficult to accept.Soon computers and the use of computers will be part of everyday routine. Many school children today in Sweden are being given a good basic education in data processing and the first generation that has been exposed to this education has already entered active vocations.The technical pre-conditions for utilising computer services will be improved. Terminals will become common features of places of work, post offices, local computing centres and homes. A variety of services will be compiled and made easily available via one or more computer networks.What will be the needs of linguistic data bank users in the future? These can of course vary to a large extent, but I believe that the ones we should pay attention to are the simple, down-to-earth requests, which can be summed up under the following keywords: simplicity, quality and service.The LDB must be simple to use, otherwise its use will be restricted to a small group of enthusiasts. LDB products and services must be of high quality, that is, they must suit their purpose. The LDB administrator must always be prepared to provide tailor-made services in response to the wishes of his clients.The "simplicity" aspect has a very special dimension, it can namely refer to a specific LDB or a group of LDBs. It is apparent that many LDB customers in small countries like Sweden would prefer to have access to the LDBs of other countries as well. If the basic functions of all the different LDBs were well coordinated such an international exchange would be greatly facilitated.In order to be easy to use an LDB should have the following characteristics:Administrative routines that control access to an LDB (subscription, billing etc) should be uncomplicated. It should be easy to log on.Availability should be of a high degree, that is, there should be a minimum risk that the LDB will not work properly due to some technical reason.The search language should be standardised. The LDB manual should preferably have a standardised layout. The description of the search language and the LDB should be written at different levels of complexity so that each user, regardless of prior knowledge, would be able to find a suitable text-type.The internal data structure and default output formats should, if possible, agree with other linguistic data banks. It should be easy to create a format of one's own. Questions and directives from a search session should be easily stored to permit automatic execution at another time or in another LDB.There should be automatic functional aids for both the IR system and the LDB itself. The IR system should, by responding to specific directives, inform about search language and other data-technical aspects. Error messages should be easy to interpret. The LDB should contain meta-information on content and application etc., which can be produced on the terminal via special search sequences.The linguistic information contained in the LDB must be correct in some, well-defined respect. 
It must be evident under which circumstances a piece of information is correct.The user must be able to determine which degree of completeness the LDB has, both horizontally (which are the subject fields treated?) and vertically (at which level of abstraction is a subject field handled?). In certain cases it may be necessary to declare what the LDB does not contain or which services it cannot supply.A user should quickly be able to contact qualified personnel to clear up questions of a data-technical nature (for example how to obtain a certain type of result from the IR system) or in questions of content (for example how one should interpret the information resulting from a specific search session).In addition to the fundamental supply of services and products the LDB administrator should be able to create specially-made computer routines for those clients who request it.Efficient forms of cooperation should exist between the LDB organisation and its clients where the views of the users on computer techniques and content are taken into account and contribute to the development of the linguistic data bank.What can I as an LDB administrator do to meet the needs I believe future users will have? Let us outline a main frame:It is likely that large search service organisations will be the most efficient suppliers of IR services in the future. I should therefore agree to have a search service in operating my LDB. The search service could take care of the practical matters of subscription and so forth in a professional way. At the same time the search service could offer my customers access to other data bases, both LDBs and other types, made accessible through the international data networks of which the service would be a member.The search services would maintain a reliable computer centre with several information retrieval systems. All would have a standardised interface between data base and user, while internal functions would be different. I could therefore select an IR system to suit my own needs for special routines, while the user would see practically no difference between any of the systems.should be able to come to an agreement, through international LDB cooperative efforts, on matters pertaining to data techniques and language theory. I would apply these agreements in such a way that to the user the LDB would appear to be organised in the same manner as other LDBs. This I will have achieved with the help of certain internal structuring of data or front-end application programmes. I would use the search service organisation mainly for the operation of the LDB. Possibly I might permit the search service to conduct training and marketing as well. A separate group of linguists and computer specialists would be necessary for the other functions, those that ensure that the LDB is furnished with a useful content.Within the LDB organisation we would continuously cooperate with institutions involved with language and its use (such as terminology centres, standardisation centres, language planning institutions, dictionary publishers etc). The LDB customers would be offered some form of continuous cooperation, for example in user groups. If needed, we could hire subject field specialists as consultants for specific studies on some particular problem of language use.In the agreement between the LDB organisation and the search service it would be clearly defined who is responsible for any particular service. 
In those cases where the responsibility would be mine I would see to it that my organisation would have all the necessary competence to solve the tasks that are likely to occur. Customers would not have to keep track of the division of responsibility. All contacts with customers would be handled by a customer service section within the LDB organisation. Customer service would convey all businesses to the proper destination within the LDB organisation or, depending upon the nature of the matter at hand, to the search service organisation.What chance does a linguistic data bank have as a product in a traditional market governed by competition, supply and demand? If there were but one LDB in the market there would be little competition. Many of the products and services available from an LDB would not be obtainable from any other source.But even a unique product must be in demand before it can be sold. The field of information supply has always been noted for the traditional belief that it is a cost-free resource. Even if this attitude is slowly changing, and a readiness to pay for information is slowly making itself felt, we must nevertheless devote a great deal of attention to the task of rendering linguistic data banks more attractive to potential customers.The basic rule of course is that an LDB must be useful. The user's utility has its source in the content of the linguistic data bank: the right kind of information, of a satisfactory quality, must be there. The utility principle must be used for LDB handling in general. Products and services must be built up around the central information content in such a thoroughly thought-out way that all stages of the linguistic information supply fit together.And finally, in an open market we must expect a user to pay for LDB service only if: -his total costs are reduced his income increases the costs of LDB services can be seen as an investment with future returns.It is very difficult to determine to what extent in the future a fully-fledged LDB can be financed entirely by fees for its use or for the specific tasks it performs. This will for the most part be a result of the market situation for data bases in general. The best situation would be to allow a free and effective price mechanism to work with minimum regulation. The user's willingness to pay would therefore provide valuable information to whoever is responsible for the organisation and development of an LDB.The problem arises if there are not enough LDB users with the economic ability to utilise the bank at market prices. It is possible that an LDB would be a useful aid for teachers and students in their daily instruction; for the public at large in their attempt to keep abreast of technical development and participate in the democratic processes in society, and so on. But these types of users would not be able to pay what it would cost, therefore some form of subsidy would be necessary.If linguistic data banks have national coverage and fulfil an important need in any particular country's linguistic heritage and development it stands to reason that operations should in part be financed as a part of a cultural programme sponsored by the nation at large. | null | null | null | null | Main paper:
the origins of term banks:
Term Banks are a recent innovation. The technology required is in principle a product of the 1970s. For the most part it has been a pioneer undertaking, uniting linguistic and computer skills to create an entirely new field of operation. Work of a pioneering nature usually displays certain basic characteristics. A handful of enthusiasts, devoted to their ideas, energetically seek to put these ideas into effect in an atmosphere where only a few people are capable of discerning the benefits the new ideas would ensure. The first results are characterised by improvisations and inadequacies. Funds are insufficient to permit market promotion, or to conduct follow-up studies among customers in order to improve the product. Now, a few years into the decade of the 1980s, it can with confidence be asserted that term banks are assured a place in the modern world. Some of the banks have grown to be surprisingly large and fulfil important functions within such institutions as the Commission of the European Communities. New, expanding user groups have become interested in the services a term bank can provide. Valuable experience in data base handling and information retrieval in general is being rapidly acquired. Therefore, there is a considerable body of material to analyse if we are to begin planning for a second generation of term banks, one for tomorrow's world. The focus of our efforts in future planning should be the user and his needs. All too often we have allowed the computer's possibilities and limitations to determine the nature of the system we use. We devised specific solutions mainly because it was possible to do so in that way and not necessarily because a user was interested in that particular solution. The objectives of an administrator of a term bank can of course vary considerably. There is nothing wrong with creating a small term bank that has limited functions for the use of a small group of users whose needs are well known in advance. But naturally it is tempting to think big when, after all, something so difficult to foresee as the future is the object of one's attention. Therefore, I intend to concentrate upon the idea of a large, general term bank to serve an entire nation. Such a bank would satisfy the needs of users with a variety of tasks, of prior knowledge, of organisational adherence, or of requirements for a specific product. I have until now used the expression "term bank". But because the subject is somewhat futuristic I shall instead use an interesting neologism that I believe originated with Professor Sager of UMIST, namely "linguistic data bank", or LDB. A linguistic data bank is: "a collection, stored in a computer, of special language vocabularies, including nomenclatures, together with the information required for their multilingual dictionary for direct consultation, as a basis for dictionary production, as a control instrument for consistency of usage and term creation and as an ancillary tool in information and documentation". Data terminals are few and far between in Sweden. For most people here computers are still mysterious and complicated phenomena -more of a threat than a help. The number of people who have mastered online information retrieval methods is limited, perhaps a thousand in all.
Of these 75% are employed as IR specialists and intermediaries. There are many different IR systems in use today, but it is hardly possible to master more than three search languages. The end user seldom uses online IR methods. Instead traditional, cost-free methods are preferred if one makes the search oneself. The idea that information should cost something to obtain is an idea difficult to accept. Soon computers and the use of computers will be part of everyday routine. Many school children today in Sweden are being given a good basic education in data processing and the first generation that has been exposed to this education has already entered active vocations. The technical pre-conditions for utilising computer services will be improved. Terminals will become common features of places of work, post offices, local computing centres and homes. A variety of services will be compiled and made easily available via one or more computer networks. What will be the needs of linguistic data bank users in the future? These can of course vary to a large extent, but I believe that the ones we should pay attention to are the simple, down-to-earth requests, which can be summed up under the following keywords: simplicity, quality and service. The LDB must be simple to use, otherwise its use will be restricted to a small group of enthusiasts. LDB products and services must be of high quality, that is, they must suit their purpose. The LDB administrator must always be prepared to provide tailor-made services in response to the wishes of his clients. The "simplicity" aspect has a very special dimension, it can namely refer to a specific LDB or a group of LDBs. It is apparent that many LDB customers in small countries like Sweden would prefer to have access to the LDBs of other countries as well. If the basic functions of all the different LDBs were well coordinated such an international exchange would be greatly facilitated. In order to be easy to use an LDB should have the following characteristics: Administrative routines that control access to an LDB (subscription, billing etc) should be uncomplicated. It should be easy to log on. Availability should be of a high degree, that is, there should be a minimum risk that the LDB will not work properly due to some technical reason. The search language should be standardised. The LDB manual should preferably have a standardised layout. The description of the search language and the LDB should be written at different levels of complexity so that each user, regardless of prior knowledge, would be able to find a suitable text-type. The internal data structure and default output formats should, if possible, agree with other linguistic data banks. It should be easy to create a format of one's own. Questions and directives from a search session should be easily stored to permit automatic execution at another time or in another LDB. There should be automatic functional aids for both the IR system and the LDB itself. The IR system should, by responding to specific directives, inform about search language and other data-technical aspects. Error messages should be easy to interpret. The LDB should contain meta-information on content and application etc., which can be produced on the terminal via special search sequences. The linguistic information contained in the LDB must be correct in some, well-defined respect.
It must be evident under which circumstances a piece of information is correct. The user must be able to determine which degree of completeness the LDB has, both horizontally (which are the subject fields treated?) and vertically (at which level of abstraction is a subject field handled?). In certain cases it may be necessary to declare what the LDB does not contain or which services it cannot supply. A user should quickly be able to contact qualified personnel to clear up questions of a data-technical nature (for example how to obtain a certain type of result from the IR system) or in questions of content (for example how one should interpret the information resulting from a specific search session). In addition to the fundamental supply of services and products the LDB administrator should be able to create specially-made computer routines for those clients who request it. Efficient forms of cooperation should exist between the LDB organisation and its clients where the views of the users on computer techniques and content are taken into account and contribute to the development of the linguistic data bank. What can I as an LDB administrator do to meet the needs I believe future users will have? Let us outline a main frame: It is likely that large search service organisations will be the most efficient suppliers of IR services in the future. I should therefore agree to have a search service in operating my LDB. The search service could take care of the practical matters of subscription and so forth in a professional way. At the same time the search service could offer my customers access to other data bases, both LDBs and other types, made accessible through the international data networks of which the service would be a member. I should be able to come to an agreement, through international LDB cooperative efforts, on matters pertaining to data techniques and language theory. I would apply these agreements in such a way that to the user the LDB would appear to be organised in the same manner as other LDBs. This I will have achieved with the help of certain internal structuring of data or front-end application programmes. I would use the search service organisation mainly for the operation of the LDB. Possibly I might permit the search service to conduct training and marketing as well. A separate group of linguists and computer specialists would be necessary for the other functions, those that ensure that the LDB is furnished with a useful content. Within the LDB organisation we would continuously cooperate with institutions involved with language and its use (such as terminology centres, standardisation centres, language planning institutions, dictionary publishers etc). The LDB customers would be offered some form of continuous cooperation, for example in user groups. If needed, we could hire subject field specialists as consultants for specific studies on some particular problem of language use. In the agreement between the LDB organisation and the search service it would be clearly defined who is responsible for any particular service.
In those cases where the responsibility would be mine I would see to it that my organisation would have all the necessary competence to solve the tasks that are likely to occur. Customers would not have to keep track of the division of responsibility. All contacts with customers would be handled by a customer service section within the LDB organisation. Customer service would convey all businesses to the proper destination within the LDB organisation or, depending upon the nature of the matter at hand, to the search service organisation. What chance does a linguistic data bank have as a product in a traditional market governed by competition, supply and demand? If there were but one LDB in the market there would be little competition. Many of the products and services available from an LDB would not be obtainable from any other source. But even a unique product must be in demand before it can be sold. The field of information supply has always been noted for the traditional belief that it is a cost-free resource. Even if this attitude is slowly changing, and a readiness to pay for information is slowly making itself felt, we must nevertheless devote a great deal of attention to the task of rendering linguistic data banks more attractive to potential customers. The basic rule of course is that an LDB must be useful. The user's utility has its source in the content of the linguistic data bank: the right kind of information, of a satisfactory quality, must be there. The utility principle must be used for LDB handling in general. Products and services must be built up around the central information content in such a thoroughly thought-out way that all stages of the linguistic information supply fit together. And finally, in an open market we must expect a user to pay for LDB service only if:
- his total costs are reduced
- his income increases
- the costs of LDB services can be seen as an investment with future returns.
It is very difficult to determine to what extent in the future a fully-fledged LDB can be financed entirely by fees for its use or for the specific tasks it performs. This will for the most part be a result of the market situation for data bases in general. The best situation would be to allow a free and effective price mechanism to work with minimum regulation. The user's willingness to pay would therefore provide valuable information to whoever is responsible for the organisation and development of an LDB. The problem arises if there are not enough LDB users with the economic ability to utilise the bank at market prices. It is possible that an LDB would be a useful aid for teachers and students in their daily instruction; for the public at large in their attempt to keep abreast of technical development and participate in the democratic processes in society, and so on. But these types of users would not be able to pay what it would cost, therefore some form of subsidy would be necessary. If linguistic data banks have national coverage and fulfil an important need in any particular country's linguistic heritage and development it stands to reason that operations should in part be financed as a part of a cultural programme sponsored by the nation at large.
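The organisational outline above leaves the technical shape of an LDB entry implicit. Purely as an illustrative sketch, and with every field name and sample value invented for the example rather than taken from any existing bank, the kind of record and shared default output format argued for earlier in the paper might look like this:

# Illustrative sketch of a single LDB entry in the sense of Sager's definition
# quoted above, together with one shared "default output format". All field
# names and sample values are assumptions made for this example.
ldb_entry = {
    "term": "linguistic data bank",
    "language": "en",
    "subject_field": "terminology",
    "definition": "a collection, stored in a computer, of special language vocabularies",
    "equivalents": {"sv": "termbank", "de": "Terminologie-Datenbank"},
    "source": "Sager / UMIST",
}

def default_output(entry):
    # One presentation routine that cooperating banks could share, so a user
    # sees the same layout whichever LDB answers the query.
    lines = [entry["term"] + " (" + entry["language"] + ") - " + entry["subject_field"]]
    for lang, equivalent in sorted(entry["equivalents"].items()):
        lines.append("  " + lang + ": " + equivalent)
    lines.append("  source: " + entry["source"])
    return "\n".join(lines)

print(default_output(ldb_entry))

If the internal structure and the default format really did agree across banks, as the paper asks, a stored search session could be replayed against another country's LDB without the user having to relearn either the records or their presentation.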
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0.001972 | null | null | null | null | null | null | null | null |
5981435032d158d0c7542e1176c50243c05af39e | 237295794 | null | Words in the air | I am going to try and describe to you the use of standardised language in aviation and how important we consider it to be. | {
"name": [
"Dancer, John"
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 0 | 1 | null | The business of international civil aviation is relatively young and really began in earnest after the second world war. In order to achieve standardisation throughout the world of civil aviation a body called the International Civil Aviation Organisation (ICAO) was set up by the United Nations in 1947. Today ICAO has over 140 member states; these states confer over all aspects of civil aviation, one of which is radiotelephony phraseology. At this moment the latest revision to radiotelephony phraseology is passing through the final stages at ICAO HQ Montreal for implementation around the world next year.Initially there was very little contact between ground stations and aircraft and such contact as did take place was in morse code or wireless telegraphy (W/T) and high frequency (HF) radio which was not necessarily different. As the aviation business grew so did the need to communicate and eventually a system was developed in civil aviation using mainly very high frequency (VHF). During this build-up to the modern day it became obvious that there was a need for standardisation to eliminate confusion. One of the earliest problems was that the numbers 5 and 9 sound very similar over the radio telephone (RTF) and so they are pronounced fife and niner. The language used in civil aviation is primarily English, with French, Spanish and Russian as the other official languages. It would obviously be ideal if there were only one official language but national pride and tradition get in the way.To expand a little on how these languages are used: each nation may speak its own language but must be able to communicate with international air traffic in English, or one of the three other official languages. Again I stress that it would be far simpler to have one language for aviation -this in itself might reduce the chance of confusion.To give you an idea of how a pilot and controller speak and sound to each other I am going to play you a short tape recoding of Heathrow ground movement control (GMC).You may or may not have understood all of what was happening on that tape because of the technical terms used, but the language was obvious and clear. We had examples of various nationalities communicating with each other in a standard manner. The necessity for everyone to describe or call a procedure or piece of equipment by the same name is equally important and failure to do so has contributed to some fatal accidents. Another problem is pronunciation of words and attention has been paid to this side of the language used. A frivolous example of mispronunciation accompanied by a heavy accent may help to describe this better. "A gentleman on holiday in England with some friends was asked whether there was anything he wanted to see and he replied yes, I would like to see a tat-ched-a-cottedger; this confused the gentleman's friends who had absolutely no idea what a tat-ched-a-cottedger was. The problem remained unresolved until one day when they went out for lunch to a pub in the country. The gentleman whilst admiring the countryside saw a tat-ched-a-cottedger and pointed it out to his friends who were amazed to see a thatched cottage. You may now appreciate the need for uniform pronunciation. 
ICAP produce documents for world-wide distribution which display the words, their meanings, pronunciation and examples of their use.I will play you another short extract of a normal day at the London Air Traffic Control Centre followed by a talk down approach to a military airfield. The controller is one of approximately 30 radar controllers working at any one time and he is working the North East corner of the London Area from 2000 feet to 13000 feet within which one of the four Heathrow stacks is situated.You may still be confused by some of the technical terms used but I think that these examples show the need for uniformity of language to avoid the possibility of ambiguities and so confusion. Confusion in aviation can be fatal.conclude, I would like to remind you that many different airlines operate in and out of London's Heathrow Airport. When you add to these airlines the ones which ply their trade out of London's other airports and those who overfly the UK en route to transatlantic or continental destinations, you can imagine that there is a considerable number of aircraft in the air at any one time. The absolute necessity for instructions to be understood and complied with requires the use of standard phraseology and clear pronunciation. Now that you have had a brief insight into the world of air traffic control and aviation I hope you can understand why our language is still developing and that new techniques have to be studied, and if necessary, words invented to suit all the people who are likely to use it. | null | null | null | null | Main paper:
:
The business of international civil aviation is relatively young and really began in earnest after the second world war. In order to achieve standardisation throughout the world of civil aviation a body called the International Civil Aviation Organisation (ICAO) was set up by the United Nations in 1947. Today ICAO has over 140 member states; these states confer over all aspects of civil aviation, one of which is radiotelephony phraseology. At this moment the latest revision to radiotelephony phraseology is passing through the final stages at ICAO HQ Montreal for implementation around the world next year. Initially there was very little contact between ground stations and aircraft and such contact as did take place was in morse code or wireless telegraphy (W/T) and high frequency (HF) radio which was not necessarily different. As the aviation business grew so did the need to communicate and eventually a system was developed in civil aviation using mainly very high frequency (VHF). During this build-up to the modern day it became obvious that there was a need for standardisation to eliminate confusion. One of the earliest problems was that the numbers 5 and 9 sound very similar over the radio telephone (RTF) and so they are pronounced fife and niner. The language used in civil aviation is primarily English, with French, Spanish and Russian as the other official languages. It would obviously be ideal if there were only one official language but national pride and tradition get in the way. To expand a little on how these languages are used: each nation may speak its own language but must be able to communicate with international air traffic in English, or one of the three other official languages. Again I stress that it would be far simpler to have one language for aviation -this in itself might reduce the chance of confusion. To give you an idea of how a pilot and controller speak and sound to each other I am going to play you a short tape recording of Heathrow ground movement control (GMC). You may or may not have understood all of what was happening on that tape because of the technical terms used, but the language was obvious and clear. We had examples of various nationalities communicating with each other in a standard manner. The necessity for everyone to describe or call a procedure or piece of equipment by the same name is equally important and failure to do so has contributed to some fatal accidents. Another problem is pronunciation of words and attention has been paid to this side of the language used. A frivolous example of mispronunciation accompanied by a heavy accent may help to describe this better. "A gentleman on holiday in England with some friends was asked whether there was anything he wanted to see and he replied yes, I would like to see a tat-ched-a-cottedger; this confused the gentleman's friends who had absolutely no idea what a tat-ched-a-cottedger was. The problem remained unresolved until one day when they went out for lunch to a pub in the country. The gentleman whilst admiring the countryside saw a tat-ched-a-cottedger and pointed it out to his friends who were amazed to see a thatched cottage." You may now appreciate the need for uniform pronunciation. ICAO produce documents for world-wide distribution which display the words, their meanings, pronunciation and examples of their use. I will play you another short extract of a normal day at the London Air Traffic Control Centre followed by a talk down approach to a military airfield.
The controller is one of approximately 30 radar controllers working at any one time and he is working the North East corner of the London Area from 2000 feet to 13000 feet within which one of the four Heathrow stacks is situated. You may still be confused by some of the technical terms used but I think that these examples show the need for uniformity of language to avoid the possibility of ambiguities and so confusion. Confusion in aviation can be fatal. To conclude, I would like to remind you that many different airlines operate in and out of London's Heathrow Airport. When you add to these airlines the ones which ply their trade out of London's other airports and those who overfly the UK en route to transatlantic or continental destinations, you can imagine that there is a considerable number of aircraft in the air at any one time. The absolute necessity for instructions to be understood and complied with requires the use of standard phraseology and clear pronunciation. Now that you have had a brief insight into the world of air traffic control and aviation I hope you can understand why our language is still developing and that new techniques have to be studied, and if necessary, words invented to suit all the people who are likely to use it.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0.001972 | null | null | null | null | null | null | null | null |
4bc7b5c5d64a00918b292c2e5d5bb2050ac0c46a | 237295780 | null | A glossary in print: the problems and rewards of producing your own glossary for sale | For those of you who have come to hear about computers -and I realise that applies to most of you -now's the time to take that post-prandial nap! And if you have come to hear about computers as such, instead of terminology as such, then my talk is not for you anyway. Now computers are fine in the right circumstances and I do not want to give the impression that I'm against them. In fact we investigated the possibility of using them for production of our Glossary (updating, indexing, etc.), and the layout of the glossary is specifically designed to facilitate transfer of the contents of the Glossary to computer (subject, of course, to copyright). We can all benefit from computerisation, as we are seeing at this conference. However, I think it is important to remember that computers are no more than a tool and are not always justified by the circumstances or the cost. Many of us cannot afford them, and it would be a pity if our expertise were wasted for that reason. I think it would therefore be a mistake to assume that term banks (in the general sense) in tomorrow's world will all be prepared by computer. | {
"name": [
"Percival, Christopher T."
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 0 | 0 | null | For those of you who have come to hear about computers -and I realise that applies to most of you -now's the time to take that post-prandial nap! And if you have come to hear about computers as such, instead of terminology as such, then my talk is not for you anyway.Now computers are fine in the right circumstances and I do not want to give the impression that I'm against them. In fact we investigated the possibility of using them for production of our Glossary (updating, indexing, etc.) , and the layout of the glossary is specifically designed to facilitate transfer of the contents of the Glossary to computer (subject, of course, to copyright). We can all benefit from computerisation, as we are seeing at this conference. However, I think it is important to remember that computers are no more than a tool and are not always justified by the circumstances or the cost. Many of us cannot afford them, and it would be a pity if our expertise were wasted for that reason. I think it would therefore be a mistake to assume that term banks (in the general sense) in tomorrow's world will all be prepared by computer.My message will therefore be to encourage those individuals among you who may have thought about publishing your own dictionary or glossary of whatever, and may feel intimidated by all this talk of computers by the large organisations represented here. My project would be within the resources of every single one of you, either to produce or, for that matter, to purchase.In this particular session, you are also expecting to hear something about marketing. No doubt you will all have received some of those YES and NO envelopes in the post from time to time, enclosing messages like "Open this envelope for news of your Bonus Award!" or "Check inside for your free gift!", or a prize draw certificate promising you all sorts of fabulous winnings. How nice it would be to have volume sales to justify that sort of marketing expenditure! But I'm not going to talk about that either. What I am going to talk about is a very personal project which has a very practical application. So I make no apologies if my words are flavoured by my own personal feelings and reactions.As we are all so very pressed for time, I could in fact condense my talk into one sentence and then sit down -so please listen carefully to my next statement: "The problems of producing your own glossary are partly financial but mainly procedural (contents, formal, printing, marketing): the rewards are entirely personal and seldom financial." But I haven't come all this way just to tell you that, which does not tell you very much anyway. So I will expand on that sentence as far as time permits.I am an accountant. Not certified (although sometimes I think I should be), but chartered -which probably sounds at least as bad to those of our guests from abroad who do not know the exact meaning of those terms. The work of accountants is not totally unlike that of translators. I remember when I started my accountancy training many years ago, I asked a fellow trainee what it was like to be an accountant, to which he replied that it was like being a mushroom: they shut you away in a dark cupboard, and from time to time someone opens the door and throws in a load of manure! The glossary which I am going to talk about is therefore an accountancy glossary. 
(See Fig 1: Title page: Glossary of European Accounting Charts -Volume I). I do not propose to describe it in detail here. I must apologise if I lapse into accountancy jargon from time to time, but this is a specialist glossary and in any case the principles are the same for any specialist subject. Apart from this, accountancy impinges on every commercial company and therefore on any technical subject because, whatever its activity, every company has to keep accounts. Again, I am sure you all know the ordinary accountancy expressions, such as "reconciling the bank" (which means apologising for your overdraft) or "contingent liabilities" (for example a pregnant wife!).Our glossary is not a conventional dictionary. (Fig 2: Contents page for German section). It is a translation into English of lists of account headings and balance sheet classifications recommended in the countries concerned, and at this stage it gives UK English only, although an American supplement is planned in the near future. As you see in Fig 3: full contents page, it contains three sections -French, German and Spanish -and each section comprises an introduction (with some notes on historical background in the country concerned, etc.), the foreign headings listed in contextual order with the appropriate English translation next to each heading, and an alphabetical index in the original language concerned.There are four main steps to be taken when compiling the contents of such a glossary:1. Deciding what to include (or leave out). This was no problem in our case, because the accounting plans are standard in the countries concerned.2. Interpreting the correct meaning of the original. This is obvious and requires no further explanation here.3. Deciding on the appropriate translation for each term/heading. This is the crucial step, and in our case was even more difficult than usual because we have only given one translation in each case (I shall comment on this again when I refer to context in a moment). In his opening address to the conference, Brian Roden said that you have to work in committee; I agree. Obviously you can work alone, but in my experience an individual tends to get too close to his own preferred terminology and it is essential that such preferences be 'bounced' against the opinions of others. Each of the sections in our Glossary was prepared initially in draft by one member of our 'committee of three', and then every single term was discussed in committee until the most acceptable alternative was agreed.The type of alternative translation chosen can be one of several. Before giving a brief outline of these, it is worth reminding ourselves what the purpose of translation is. Briefly, it is to "enable the reader to understand the meaning of the original text in the context of that original text". With this mind, the translation may be either: If no translation can be found in any of the above three categories, there are two further types of alternatives which may be considered: d) A literal translation (producing a phrase which does not exist in English), in order to make it clear that the heading is unique or has a meaning which is peculiar to the country concerned. Some headings, for example, refer specifically to national legislation, and this fact should be indicated in the translation. e) An explanation, in order to give the heading its proper meaning and avoid misunderstanding. As you see from Fig 4, the French Plan subdivides "called up capital" between capital "non amorti" and capital "amorti". 
If we look at the phrase "non amorti", a non-accountant translator would probably translate this literally as 'unamortised capital' and be quite content with that. Unfortunately we do not talk about unamortised capital in the UK, so that such a translation would have no meaning to an English accountant. Your accountancy translator with no knowledge of the precise functioning of the French Plan would probably say to himself "Aha!" and translate the heading as 'unredeemed capital'. This, however, would be even more misleading, because unredeemed capital in the UK has a different application from that in the French Plan. In the UK, unredeemed capital is simply capital which has not been redeemed, and the phrase would be understood as such by an English accountant. In France, however, this heading is only used if some of the capital has been redeemed, and this heading is used for that part of the capital which remains after part has been redeemed. A rather cumbersome translation is therefore necessary: 'Balance of share capital remaining where part has been redeemed', but the English accountant reader knows exactly what is intended. The last stage in compiling the contents is to stand back and make sure that the translation chosen cannot be misunderstood. This is very important, and I will give only one example. (Fig 5) The 'reserve' accounts in all three sections of the Glossary include a heading which is nearly always translated into English as 'legal reserves'. This alternative is chosen in accordance with 3(a) above, on the assumption that a straight translation (because it is a straight translation in each case) will mean the same. I have never been happy with this translation, because there is no such heading in an English balance sheet and it could be misunderstood by the English reader to mean "lawful" reserves, i.e. reserves which are permissible (tax-allowable reserves, for example) but optional. In the case of the German heading however (gesetzliche Rücklage) the reserves are compulsory, i.e. statutory. The word 'statutory', however, itself presents some difficulty in the French and Spanish plans, because it could be confused with another heading (and another meaning) which we have translated as 'reserves provided for by the company's statutes'! I hope this illustrates the problem. My last point about possible misunderstandings brings me back to the question of context. Context is all-important. The same word can mean two totally different things in consecutive lines of text, and one of the problems with conventional dictionaries is that the user has to select the appropriate meaning on the basis of his own knowledge. The unique layout used in our Glossary, where each balance sheet heading, for example, is subdivided on a decimal system with further subdivisions down to the level of detail required, means that every heading is shown in its precise context and that an accountant does not necessarily even need to refer to the index at the back of each section, but can go straight to the particular section of the balance sheet with which he is concerned. The index, for its part, gives only the relevant account numbers where each term can be found, so that the reader is compelled to look up the term in the correct context of its surrounding headings. Also, as only one translation is given, the reader can safely assume that the translation is acceptable in the context shown (although other alternatives may be equally acceptable), and that there is no chance of selecting a wrong translation.
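As an aside for readers thinking about transferring this layout to a computer, the decimal arrangement described above maps onto a very simple data structure. The following is a minimal Python sketch only, with invented account codes, headings and helper names; nothing in it is taken from the published Glossary or from any national accounting plan.

    # Illustrative sketch: invented codes and headings, not the Glossary's data.
    # Each entry is keyed by its decimal account code, so a heading is always
    # stored - and displayed - together with its parent headings.
    ENTRIES = {
        "10":   {"src": "Capital et reserves", "en": "Capital and reserves"},
        "101":  {"src": "Capital",             "en": "Share capital"},
        "1013": {"src": "Capital non amorti",  "en": "Balance of share capital remaining where part has been redeemed"},
    }

    # The index gives only account numbers, never translations, so the reader
    # is always sent back to the contextual listing.
    INDEX = {"capital": ["10", "101", "1013"]}

    def show_in_context(code):
        """Print an entry preceded by every parent heading above it."""
        for length in range(1, len(code) + 1):
            parent = code[:length]
            if parent in ENTRIES:
                entry = ENTRIES[parent]
                print(f"{parent:<6} {entry['src']:<25} {entry['en']}")

    for code in INDEX["capital"]:
        show_in_context(code)
        print()

Because every enquiry goes through the account code, the same structure could in principle drive a printed listing, a microfiche master or an online enquiry without change.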
(see Fig 6 for sample page from Section Three : Spain).I would just like to point out one or two other features of the Glossary. It has a spiral binding, which enables it to lie flat on the desk when opened at a particular page (compared with a bound volume, which requires all sorts of acrobatics to keep it open at the desired page). The form of spiral binding chosen also enables future supplements or updates to be inserted and replaced pages removed without difficulty. The index pages at the end of each section are on coloured paper for ease of location. At one time we considered using coloured card, but found that it was then difficult to open the Glossary anywhere apart from at the index! Marketing. So far I have told you a lot about the product, because the product is important for the market. However, the most important thing is that there must be a market for your product. I understood one of the earlier speakers to say that he was involved in preparing glossaries for which the prospective user was not yet known. This may be OK for large organisations, but I can assure you that you must be sure of your market before you start work on any project of a size such as ours. It is very much easier to produce or adapt a product for an existing market, than vice versa.Choice of publisher is important. We did approach one large publisher, but his expensive equipment and other overheads were such that he required a much larger volume run than we envisaged, and he also required us to underwrite any loss on unsold copies. As we were therefore being asked to bear the risk of losses anyway, we also decided to maximise any profits and publish ourselves. Publishing oneself does have disadvantages, arising mainly from lack of time and advertising coverage. This inevitably involves a reduction in volume sales, but this can be more than recouped by the higher return per copy sold. There are no other problems connected with publishing: ISBN numbers, VAT, etc. are all quite simple.Printing. We have an arrangement with a local printer, based on a photocopying process, which enables us to order very small print runs at a time as required, thus keeping down our capital costs and the costs associates with stocks of unsold copies.Selling price. We originally hoped to keep the retail price below £10, but once it moved above this level we added in several extras, such as a glossy cover, coloured index pages, stronger packaging etc. and allowed the price to move up relatively sharply. This policy proved correct.Advertising. Mailing shots through the post are fairly successful in this country, not so successful in some others. Reviews in professional magazines are useful, if you can get them, coupled with advertising in appropriate journals.My customers have proved very loyal, and conferences such as this have proved very useful opportunities for publicity. An international accountants' congress was held recently in Mexico, where we managed to arrange for 2,000 copies of a publicity leaflet to be inserted in delegates' folders -essentially our target market all brought together in one place! Time is the main problem when marketing your own glossary.Rewards: so what are the rewards? They are less easy to define than the problems, but are in any case not financial. The time taken to process a single order (despatch, invoicing, payment, accounting etc.) accounts for a large part of any gross profit margin which may have been allowed. No, the rewards are entirely based on personal satisfaction. 
I needed such a glossary for my own work (and am finding it increasingly useful all the time); we identified a gap in the supply situation and feel that we have gone some way towards filling it; and we have produced a product which has been well received by the specialists at whom it was aimed. The day I left to travel to this conference there were 4 orders in the post, including 3 from Japan and 1 from Italy. Anyone who works on his own gets a tremendous kick from that sort of thing. I do not want to boast, but I sent an inspection copy of the Glossary to the largest firm of accountants in Europe, who thereupon ordered 8 copies initially for their German organisation; they have now come back with a further order for 50 copies of the German and 10 copies of the French sections for use by their various offices in those and other countries, which I regard as the supreme accolade.There are other rewards too, of course. I have gained some new translation clients from the publicity generated by the Glossary, so that there has been a mutual spin-off between my translation activities and sales of the Glossary. I have also added enormously to my own personal knowledge of the subject, partly from the intensive exchange of ideas and opinions with my two colleagues on the project and partly from our use of the most up-to-date reference works (including the EEC Fourth Directive and the UK 1981 Companies Act).Our future plans include publication of an American supplement in the near future and supplements and updates to this existing volume when required. We also intend to publish other volumes dealing with other countries, but this will only be possible at infrequent intervals owing to our other work commitments.I started this talk with a dig at computers. I should now like to retract any sting which may have seemed to be behind those remarks. Nonetheless, I insist that, in present economic circumstances and with improving communications, both of which mean that self-employment will continue to grow, there are circumstances (particularly of size) where traditional methods are still the best. And I can assure you that the personal rewards are far greater. | null | null | null | null | Main paper:
the last stage in compiling the contents is to stand back and:
make sure that the translation chosen cannot be misunderstood. This is very important, and I will give only one example. (Fig 5) The 'reserve' accounts in all three sections of the Glossary include a heading which is nearly always translated into English as 'legal reserves'. This alternative is chosen in accordance with 3(a) above, on the assumption that a straight translation (because it is a straight translation in each case) will mean the same. I have never been happy with this translation, because there is no such heading in an English balance sheet and it could be misunderstood by the English reader to mean "lawful" reserves, i.e. reserves which are permissible (tax-allowable reserves, for example) but optional. In the case of the German heading however (gesetzliche Rücklage) the reserves are compulsory, i.e. statutory. The word 'statutory', however, itself presents some difficulty in the French and Spanish plans, because it could be confused with another heading (and another meaning) which we have translated as 'reserves provided for by the company's statutes'! I hope this illustrates the problem. My last point about possible misunderstandings brings me back to the question of context. Context is all-important. The same word can mean two totally different things in consecutive lines of text, and one of the problems with conventional dictionaries is that the user has to select the appropriate meaning on the basis of his own knowledge. The unique layout used in our Glossary, where each balance sheet heading, for example, is subdivided on a decimal system with further subdivisions down to the level of detail required, means that every heading is shown in its precise context and that an accountant does not necessarily even need to refer to the index at the back of each section, but can go straight to the particular section of the balance sheet with which he is concerned. The index, for its part, gives only the relevant account numbers where each term can be found, so that the reader is compelled to look up the term in the correct context of its surrounding headings. Also, as only one translation is given, the reader can safely assume that the translation is acceptable in the context shown (although other alternatives may be equally acceptable), and that there is no chance of selecting a wrong translation. (see Fig 6 for sample page from Section Three: Spain). I would just like to point out one or two other features of the Glossary. It has a spiral binding, which enables it to lie flat on the desk when opened at a particular page (compared with a bound volume, which requires all sorts of acrobatics to keep it open at the desired page). The form of spiral binding chosen also enables future supplements or updates to be inserted and replaced pages removed without difficulty. The index pages at the end of each section are on coloured paper for ease of location. At one time we considered using coloured card, but found that it was then difficult to open the Glossary anywhere apart from at the index! Marketing. So far I have told you a lot about the product, because the product is important for the market. However, the most important thing is that there must be a market for your product. I understood one of the earlier speakers to say that he was involved in preparing glossaries for which the prospective user was not yet known. This may be OK for large organisations, but I can assure you that you must be sure of your market before you start work on any project of a size such as ours.
It is very much easier to produce or adapt a product for an existing market, than vice versa. Choice of publisher is important. We did approach one large publisher, but his expensive equipment and other overheads were such that he required a much larger volume run than we envisaged, and he also required us to underwrite any loss on unsold copies. As we were therefore being asked to bear the risk of losses anyway, we also decided to maximise any profits and publish ourselves. Publishing oneself does have disadvantages, arising mainly from lack of time and advertising coverage. This inevitably involves a reduction in volume sales, but this can be more than recouped by the higher return per copy sold. There are no other problems connected with publishing: ISBN numbers, VAT, etc. are all quite simple. Printing. We have an arrangement with a local printer, based on a photocopying process, which enables us to order very small print runs at a time as required, thus keeping down our capital costs and the costs associated with stocks of unsold copies. Selling price. We originally hoped to keep the retail price below £10, but once it moved above this level we added in several extras, such as a glossy cover, coloured index pages, stronger packaging etc. and allowed the price to move up relatively sharply. This policy proved correct. Advertising. Mailing shots through the post are fairly successful in this country, not so successful in some others. Reviews in professional magazines are useful, if you can get them, coupled with advertising in appropriate journals. My customers have proved very loyal, and conferences such as this have proved very useful opportunities for publicity. An international accountants' congress was held recently in Mexico, where we managed to arrange for 2,000 copies of a publicity leaflet to be inserted in delegates' folders - essentially our target market all brought together in one place! Time is the main problem when marketing your own glossary. Rewards: so what are the rewards? They are less easy to define than the problems, but are in any case not financial. The time taken to process a single order (despatch, invoicing, payment, accounting etc.) accounts for a large part of any gross profit margin which may have been allowed. No, the rewards are entirely based on personal satisfaction. I needed such a glossary for my own work (and am finding it increasingly useful all the time); we identified a gap in the supply situation and feel that we have gone some way towards filling it; and we have produced a product which has been well received by the specialists at whom it was aimed. The day I left to travel to this conference there were 4 orders in the post, including 3 from Japan and 1 from Italy. Anyone who works on his own gets a tremendous kick from that sort of thing. I do not want to boast, but I sent an inspection copy of the Glossary to the largest firm of accountants in Europe, who thereupon ordered 8 copies initially for their German organisation; they have now come back with a further order for 50 copies of the German and 10 copies of the French sections for use by their various offices in those and other countries, which I regard as the supreme accolade. There are other rewards too, of course. I have gained some new translation clients from the publicity generated by the Glossary, so that there has been a mutual spin-off between my translation activities and sales of the Glossary.
I have also added enormously to my own personal knowledge of the subject, partly from the intensive exchange of ideas and opinions with my two colleagues on the project and partly from our use of the most up-to-date reference works (including the EEC Fourth Directive and the UK 1981 Companies Act). Our future plans include publication of an American supplement in the near future and supplements and updates to this existing volume when required. We also intend to publish other volumes dealing with other countries, but this will only be possible at infrequent intervals owing to our other work commitments. I started this talk with a dig at computers. I should now like to retract any sting which may have seemed to be behind those remarks. Nonetheless, I insist that, in present economic circumstances and with improving communications, both of which mean that self-employment will continue to grow, there are circumstances (particularly of size) where traditional methods are still the best. And I can assure you that the personal rewards are far greater.
:
For those of you who have come to hear about computers - and I realise that applies to most of you - now's the time to take that post-prandial nap! And if you have come to hear about computers as such, instead of terminology as such, then my talk is not for you anyway. Now computers are fine in the right circumstances and I do not want to give the impression that I'm against them. In fact we investigated the possibility of using them for production of our Glossary (updating, indexing, etc.), and the layout of the glossary is specifically designed to facilitate transfer of the contents of the Glossary to computer (subject, of course, to copyright). We can all benefit from computerisation, as we are seeing at this conference. However, I think it is important to remember that computers are no more than a tool and are not always justified by the circumstances or the cost. Many of us cannot afford them, and it would be a pity if our expertise were wasted for that reason. I think it would therefore be a mistake to assume that term banks (in the general sense) in tomorrow's world will all be prepared by computer. My message will therefore be to encourage those individuals among you who may have thought about publishing your own dictionary or glossary of whatever, and may feel intimidated by all this talk of computers by the large organisations represented here. My project would be within the resources of every single one of you, either to produce or, for that matter, to purchase. In this particular session, you are also expecting to hear something about marketing. No doubt you will all have received some of those YES and NO envelopes in the post from time to time, enclosing messages like "Open this envelope for news of your Bonus Award!" or "Check inside for your free gift!", or a prize draw certificate promising you all sorts of fabulous winnings. How nice it would be to have volume sales to justify that sort of marketing expenditure! But I'm not going to talk about that either. What I am going to talk about is a very personal project which has a very practical application. So I make no apologies if my words are flavoured by my own personal feelings and reactions. As we are all so very pressed for time, I could in fact condense my talk into one sentence and then sit down - so please listen carefully to my next statement: "The problems of producing your own glossary are partly financial but mainly procedural (contents, format, printing, marketing): the rewards are entirely personal and seldom financial." But I haven't come all this way just to tell you that, which does not tell you very much anyway. So I will expand on that sentence as far as time permits. I am an accountant. Not certified (although sometimes I think I should be), but chartered - which probably sounds at least as bad to those of our guests from abroad who do not know the exact meaning of those terms. The work of accountants is not totally unlike that of translators. I remember when I started my accountancy training many years ago, I asked a fellow trainee what it was like to be an accountant, to which he replied that it was like being a mushroom: they shut you away in a dark cupboard, and from time to time someone opens the door and throws in a load of manure! The glossary which I am going to talk about is therefore an accountancy glossary. (See Fig 1: Title page: Glossary of European Accounting Charts - Volume I). I do not propose to describe it in detail here.
I must apologise if I lapse into accountancy jargon from time to time, but this is a specialist glossary and in any case the principles are the same for any specialist subject. Apart from this, accountancy impinges on every commercial company and therefore on any technical subject because, whatever its activity, every company has to keep accounts. Again, I am sure you all know the ordinary accountancy expressions, such as "reconciling the bank" (which means apologising for your overdraft) or "contingent liabilities" (for example a pregnant wife!). Our glossary is not a conventional dictionary. (Fig 2: Contents page for German section). It is a translation into English of lists of account headings and balance sheet classifications recommended in the countries concerned, and at this stage it gives UK English only, although an American supplement is planned in the near future. As you see in Fig 3: full contents page, it contains three sections - French, German and Spanish - and each section comprises an introduction (with some notes on historical background in the country concerned, etc.), the foreign headings listed in contextual order with the appropriate English translation next to each heading, and an alphabetical index in the original language concerned. There are four main steps to be taken when compiling the contents of such a glossary: 1. Deciding what to include (or leave out). This was no problem in our case, because the accounting plans are standard in the countries concerned. 2. Interpreting the correct meaning of the original. This is obvious and requires no further explanation here. 3. Deciding on the appropriate translation for each term/heading. This is the crucial step, and in our case was even more difficult than usual because we have only given one translation in each case (I shall comment on this again when I refer to context in a moment). In his opening address to the conference, Brian Roden said that you have to work in committee; I agree. Obviously you can work alone, but in my experience an individual tends to get too close to his own preferred terminology and it is essential that such preferences be 'bounced' against the opinions of others. Each of the sections in our Glossary was prepared initially in draft by one member of our 'committee of three', and then every single term was discussed in committee until the most acceptable alternative was agreed. The type of alternative translation chosen can be one of several. Before giving a brief outline of these, it is worth reminding ourselves what the purpose of translation is. Briefly, it is to "enable the reader to understand the meaning of the original text in the context of that original text". With this in mind, the translation may be either: ... If no translation can be found in any of the above three categories, there are two further types of alternatives which may be considered: d) A literal translation (producing a phrase which does not exist in English), in order to make it clear that the heading is unique or has a meaning which is peculiar to the country concerned. Some headings, for example, refer specifically to national legislation, and this fact should be indicated in the translation. e) An explanation, in order to give the heading its proper meaning and avoid misunderstanding. As you see from Fig 4, the French Plan subdivides "called up capital" between capital "non amorti" and capital "amorti".
If we look at the phrase "non amorti", a non-accountant translator would probably translate this literally as 'unamortised capital' and be quite content with that. Unfortunately we do not talk about unamortised capital in the UK, so that such a translation would have no meaning to an English accountant. Your accountancy translator with no knowledge of the precise functioning of the French Plan would probably say to himself "Aha!" and translate the heading as 'unredeemed capital'. This, however, would be even more misleading, because unredeemed capital in the UK has a different application from that in the French Plan. In the UK, unredeemed capital is simply capital which has not been redeemed, and the phrase would be understood as such by an English accountant. In France, however, this heading is only used if some of the capital has been redeemed, and this heading is used for that part of the capital which remains after part has been redeemed. A rather cumbersome translation is therefore necessary: 'Balance of share capital remaining where part has been redeemed', but the English accountant reader knows exactly what is intended. a) Straight
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0 | null | null | null | null | null | null | null | null |
c02406edd5c89a86d931c117f1469bd266379360 | 41067176 | null | Software for term banks | Many of the highly-developed term banks in operation today use purpose-built software. Some of the reasons for this choice are put forward, and the consequences examined. The case for using more readily available software -the benefits this would bring, and the penalties that must be paid -are then examined. 1. | {
"name": [
"Negus, A. E."
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 0 | 0 | null | In reviewing software for term banks, a number of different viewpoints could be adopted; that of a system designer, responsible for producing and maintaining software, that of a service provider, or that of a user of the system, for example. Each of these groups will be more interested in different aspects, and in a short paper such as this, is is not possible to go into great detail when discussing the software that is, or might be, used with operational term banks. What is of most interest to the user of a term bank is the range of facilities that the software provides, and even this is of secondary importance when compared with the content of the term bank. Nevertheless, software is an extremely important component of a system, dictating not only how information may be extracted, but even how information may be represented.Which software is applicable in any given situation depends on many factors, and it is not possible to give generally applicable answers. In an attempt to indicate some of the factors that should be considered, this paper ignores the detailed operation of software for term banksfile structures, hardware and operating system and the like -and concentrates on what the system offers to the user. Existing software is discussed in terms of the capabilities and limitations of present day systems -factors which can depend on operational decisions as much as on any limitation in software -and possible future directions are discussed against a framework of increasing integration of services and a growing recognition of the opportunities offered by information technology.The current situation is that most of the bigger existing term banks use purpose-built software, although there are cases where general purpose information retrieval software is used.Although computerised term banks have been in existence for a number of years, there seems to be little agreement as to how they should operate, and if the present situation persists, their use will continue to be low. If term banks are to become widely used certain changes in practice will be necessary; changes which in turn have implications for the software that must be used for term bank operation.However, this divergence is not surprising. Each term bank has been created for a different purpose, and is working under different constraints to provide a different service to a different category of users and usage (7) . It might seem, at first sight, that the software requirements for each and every term bank could be similar, but in practice this has not been the case. Not that the causes of the divergence are unique to the field of term banks -the same sorts of argument are advanced in other areas for similar reasons, and, indeed some of the arguments are identical to those used elsewhere. In fact, what we have is a situation where the best way of operating a term bank has yet to be agreed, if, that is, a best way has been found. Consequently, individual systems can only be judged, at this time, in their own environment, as comparisons of one against the other would not only be unfair, they would be meaningless unless considered in a far broader framework.There are many factors which influence the choice of software, and there are conflicting requirements which must somehow be reconciled. 
To identify these conflicts, is it necessary to discuss in more detail, the framework in which term banks can operate.Term banks can be created for different reasons: to provide standard, well-defined terminology for use in a particular area, such as standards and codes of practice(9), or the preparation of instructional manuals; as a source databank for the production of dictionaries or glossaries; as a tool for translators, or for more than one of these. While the basic purpose is always the same -to facilitate the transfer of knowledge and information -the aims and the means are different.Even considering term banks created specifically for use by translators, there are different markets at which services are aimed. Factors which influence the manner in which a service is provided include: the range of subjects to be translated, the languages to be covered, the location of translators, and the type of documents to be translated, as well as more detailed points such as the creation, representation and retrieval of terminological entities.There are several ways in which a term bank can be used, partly dictated by the prime services to be provided, and partly by the environment in which it is to be used. Questions may be asked in different ways depending on the product desired.For the production of dictionaries or glossaries, for example, it is probable that batch processing will be the preferred method of access in most cases, but for other services other means may be preferred.Like other large systems, term banks are expensive to create and operate. Therefore, they must often serve a number of purposes, and may be required to produce a range of products and services, including production of printing masters, microfiche, computer printout, and online searching. In terms of software and system operation, the requirements for each of these products may be different, and it is probably not possible to provide all equally satisfactorily: certain compromises will usually be necessary.For use by translators there is one school of thought, exemplified by Eurodicautom (3, 4, 5) , and Termium which regards online interrogation as the prime means of access, although batch listings can be produced, while another, exemplified by Lexis (8) and Team (13, 14, 16) , regards batch searching as the norm, with online access being an added facility to be used with discretion. Some of the reasons for these differences in approach have nothing to do with the software itself, of course, except in so far as they do influence what is demanded of the software, being adopted for other reasons concerned with the environment in which users, that is translators, operate, and the nature, scope and purpose of the documents they are translating.Online interaction is clearly superior, other things, such as the quality of information offered, being equal, when:-Each document is worked on by only one translator, -A wide range of subjects is to be covered, -Translators are in several locations, including many working alone, perhaps freelance.Offline working, i.e. production of subject or document based lists at a central sire for provision in printed form to translators, can give advantages when:-Documents are often worked on by several translators (all can be given specialised vocabularies to aid consistency), -The consequences of errors can be catastrophic (e.g. 
instruction manuals, safety rules) -with online production of printed word lists it is possible to check and eliminate any inconsistencies or ambiguities which might not be apparent when looking at single entries online.However, it should be noted that even when offline working is the preferred mode, it can be advantageous to have online access, particularly for terminologists responsible for the creation and maintenance of the collection.When online access is provided there are different solutions that can be adopted. If language and specialised terminology were static and precise and used consistently there would, of course, be little problem. It would merely be necessary to enter the expression for which an equivalent was sought and the system could immediately give the correct answer. However, in practice this is often not the case; variant spellings can intrude and expressions are coined, modified and misused.It is therefore necessary for a system to offer several alternate solutions, none of which may, in the event, turn out to be helpful. Thus, a system may offer lists of alphabetically adjacent terms for consideration, or nearest matches, calculated using predetermined embedded algorithms, may be presented singly, as is the case with Eurodicautom.On multilingual term banks there are two ways in which the data can be arranged; which is preferred depends on the content of the bank, the way in which it is created, and the manner in which it is used. If there is one main language for the system, terminological records, and the software to handle them, can be quite different from what might be needed if all languages are equally important. Thus, in the first instance, all entries can refer directly to the main languages, with the seeking of equivalents in two of the minor languages being carried out in two stages, using the main language as a 'switching' language. This is the approach adopted by Lexis. Alternatively, for each concept represented, terms from several languages may be collected together in a single record. This can clearly introduce more uncertainty into the provision of equivalents than would arise if only two languages were linked. This is the solution adopted by Eurodicautom.As has been indicated throughout this section there are many different techniques that can be adopted in the operation of term banks. While all the alternatives are not necessarily mutually exclusive, it is, nevertheless unusual to find systems offering a wide range of possibilities. Partly this is because of positive decisions by those responsible for operating the term bank, but often the software in use allows little choice, once initial design decisions have been made. It is not easy to add additional facilities, even when a need is recognised, and sometimes it may be impossible to do so by modifying or extending existing software.One other constraint should be mentioned: in spite of some claims to the contrary, software is often linked to specific hardware, certainly if that software is to perform to its best ability. Thus the choice of software may well be limited for reasons outside the direct control of the actual operator of the term bank. For example, it comes as no surprise to learn that Team runs on Siemens hardware.Before looking at possible future trends it is worth recalling some rather obvious and well-known points about software. Software is, of course, a critical part of any computer-based system, but is it important to remember that it is no more than a tool. 
Often software dictates the course that a service follows, whereas the opposite should be the case. The development in computer hardware has been remarkable over the past decade or so, but software has hardly kept pace. Many operating systems, for example, are firmly based on systems originally developed in the 1960s, and many so-called new developments, such as relational database systems, first proposed over ten years ago, have been many years in gestation.One of the reasons for the slowness in the development of software is that no new techniques have been developed to aid software production, which remains a demanding, labour intensive activity. Furthermore, as hardware performance has increased, and costs lowered, software has become more and more complex to take advantage of these improvements. It has, therefore, become increasingly difficult and expensive to create or modify existing software, and standards have had an even more significant role to play in allowing interworking between different software products.The range of services needed, and particularly the ways in which those services are likely to be used, is certain to change considerably in the coming years. Much is made of information technology and the opportunities it offers for new ways of working, and translation is one profession that is better placed to take advantage of new technologies than many others; indeed many translators have been working, without the benefits of these new innovations, in the very manner which their introduction, we are told, makes so attractive. Developments that are clearly attractive include cheaper and more reliable telecommunications, perhaps using public data networks, the increased availability of word processors, developments such as increased capacity floppy discs, fixed Winchester discs, and optical discs. It can be foreseen that many translations will be created using word processors, perhaps even using split screens to display original and translation simultaneously, and the possibility of making online searches of term banks from the same terminal while making the translation is quite realistic. Even voice input and output can be envisaged in the longer term. Distribution of specialised vocabularies, perhaps on optical discs is another possibility. While it is technically possible, today, to use the same machine for word processing and accessing an online computer system over a network, these activities must usually be carried out as two distinct and separate operations, so, in a sense, the manner in which a term bank can be interrogated online is immaterial. However, if the greatest benefit is to be achieved, integration of word processing and online searching for terminology is desirable, and if this is to take place, many implications for online system design have to be considered.As mentioned in the introduction, the longer established term banks tend to use purpose built software, partly because nothing generally available at the time was found to be suitable, and partly because each is aimed at providing a range of services not found elsewhere, using terminological records and searching methods which are more or less unique. Thus there are differences of opinion as to what should be offered to the translator, as well as what comprises a terminological record, and how the terminological records should be searched and presented to the user. 
All this means that exchange of data between systems is neither as easy nor as useful as it might be, and, consequently, restricts the amount of information that any one system can offer in a coherent manner. But the consequences of this diversity in approach go further. Leaving aside any commercial or other restrictions on access that might exist, the potential user of the wealth of terminological data that is already available from term banks is faced with the difficulty of learning how to use each system, and of comprehending the information that is supplied in terms of recognising its strengths and limitations.What is needed, before suitable software can be developed or selected, is some agreement on the practice, as well as principles and theory, of terminological control.The goals are well known, but the most effective methods are not. One of the most fundamental difficulties rests in the form of representation that is used for storing and searching the data. What is stored, is, of course, an orthographic representation of the term, whereas what is really sought is the concept represented by the given characters in the source text, and a means of representing that concept in the target language. Thus, differences in spelling, for example, or other changes that occur regularly in language, cause difficulties. Some techniques which may help to overcome some of these difficulties have been developed for other reasons, and it may be that some of these, orthographic and phonemic approximation techniques, for example, could be usefully applied.In the absence of any such agreement and possibly even afterwards, it is desirable that all systems should attempt to maintain the greatest flexibility in their approach. However, this is difficult to achieve where specially created software is concerned; there is an inevitable tendency to provide what is definitely required at the time of program specification, perhaps giving little thought to what services might be required, or facilities demanded, at some indeterminate time in the future.It can be shown that many of the features needed for term bank operation can be perfectly satisfactorily provided using proprietary information retrieval packages; indeed, some operational systems do just this. There are, of course, penalties, as file structures provided for in the original design of the retrieval system must be employed, as must the retrieval facilities. Whether such an approach produces anything that is worse than could be provided otherwise is extremely doubtful, in fact the result is likely to be superior in the long term, as proprietary software from a reputable supplier, is generally far more hospitable to future changes and developments than are purpose-built program suites. All that can be said with certainty, is that the resulting service may well look different! Software products used for term banks should conform, as far as possible, to agreed standards and conventions, so that interworking with other systems, be it for exchange of data, or to allow word processors to search the data bank, becomes easier, and so that changes to new, improved software can be made without disrupting the smooth running of the system, or inconveniencing users.For the would-be operator of a term bank, many of the factors that must be considered, apart from whether the software will actually do the job, are just the same as for any other software selection exercise. 
They include the documentation supplied with the program, the support that will be given by the supplier to ensure that it will continue to operate satisfactorily with new releases of an operating system, for example, and the ease with which the service can be switched to different hardware or software, should the need arise. All of these make selection of a general purpose package more attractive than the use of a purpose-built system, which may perform the task perfectly adequately, yet is unsupported in any real sense. | null | null | null | It will be a long time before good, widely-acceptable software becomes available as there is no consensus as to what should be provided, nor is there yet any substantial market for such a product.If there is to be any progress, the first requirement is for users and creators/promoters of term banks to get together, rather more than they appear to have responded to INFOTERM initiatives, and decide what it is they are trying to do, and how to do it. It will be necessary to go further that the INFOTERM proposals themselves have gone (1, 2) .Otherwise, what incentive is there for anyone to spend valuable time and effort providing a solution that may not be used in any event? -software is extremely expensive to produce and maintain!In the meantime, the most satisfactory solutions may well be achieved using standard general purpose information retrieval packages, which while having limitations are generally able to provide a flexible solution and respond to changing demands. More importantly, by allowing the system operator to divorce data acquisition from the rigid requirements of a software system, they provide for easier changes to future, perhaps unforeseen, solutions in the years to come. | Main paper:
the current situation:
The current situation is that most of the bigger existing term banks use purpose-built software, although there are cases where general purpose information retrieval software is used. Although computerised term banks have been in existence for a number of years, there seems to be little agreement as to how they should operate, and if the present situation persists, their use will continue to be low. If term banks are to become widely used, certain changes in practice will be necessary; changes which in turn have implications for the software that must be used for term bank operation. However, this divergence is not surprising. Each term bank has been created for a different purpose, and is working under different constraints to provide a different service to a different category of users and usage (7). It might seem, at first sight, that the software requirements for each and every term bank could be similar, but in practice this has not been the case. Not that the causes of the divergence are unique to the field of term banks - the same sorts of argument are advanced in other areas for similar reasons, and, indeed, some of the arguments are identical to those used elsewhere. In fact, what we have is a situation where the best way of operating a term bank has yet to be agreed, if, that is, a best way has been found. Consequently, individual systems can only be judged, at this time, in their own environment, as comparisons of one against the other would not only be unfair, they would be meaningless unless considered in a far broader framework. There are many factors which influence the choice of software, and there are conflicting requirements which must somehow be reconciled. To identify these conflicts, it is necessary to discuss, in more detail, the framework in which term banks can operate. Term banks can be created for different reasons: to provide standard, well-defined terminology for use in a particular area, such as standards and codes of practice (9), or the preparation of instructional manuals; as a source databank for the production of dictionaries or glossaries; as a tool for translators; or for more than one of these. While the basic purpose is always the same - to facilitate the transfer of knowledge and information - the aims and the means are different. Even considering term banks created specifically for use by translators, there are different markets at which services are aimed. Factors which influence the manner in which a service is provided include: the range of subjects to be translated, the languages to be covered, the location of translators, and the type of documents to be translated, as well as more detailed points such as the creation, representation and retrieval of terminological entities. There are several ways in which a term bank can be used, partly dictated by the prime services to be provided, and partly by the environment in which it is to be used. Questions may be asked in different ways depending on the product desired. For the production of dictionaries or glossaries, for example, it is probable that batch processing will be the preferred method of access in most cases, but for other services other means may be preferred. Like other large systems, term banks are expensive to create and operate. Therefore, they must often serve a number of purposes, and may be required to produce a range of products and services, including production of printing masters, microfiche, computer printout, and online searching.
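To make the batch route for dictionary or glossary production concrete, the following minimal Python sketch shows how a subject-based printed listing is essentially a selection, a sort and a formatted dump. The records and field names are invented for illustration and do not reflect the file layout of any actual term bank.

    # Illustrative sketch: invented records, not the format of any real term bank.
    records = [
        {"de": "Anlagevermoegen", "en": "fixed assets", "subject": "accounting"},
        {"de": "Rueckstellung",   "en": "provision",    "subject": "accounting"},
        {"de": "Drehmoment",      "en": "torque",       "subject": "engineering"},
    ]

    def print_subject_glossary(records, subject):
        """Produce an alphabetical source-target listing for one subject field."""
        chosen = [r for r in records if r["subject"] == subject]
        for r in sorted(chosen, key=lambda r: r["de"].lower()):
            print(f"{r['de']:<20} {r['en']}")

    print_subject_glossary(records, "accounting")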
In terms of software and system operation, the requirements for each of these products may be different, and it is probably not possible to provide all equally satisfactorily: certain compromises will usually be necessary. For use by translators there is one school of thought, exemplified by Eurodicautom (3, 4, 5) and Termium, which regards online interrogation as the prime means of access, although batch listings can be produced, while another, exemplified by Lexis (8) and Team (13, 14, 16), regards batch searching as the norm, with online access being an added facility to be used with discretion. Some of the reasons for these differences in approach have nothing to do with the software itself, of course, except in so far as they do influence what is demanded of the software, being adopted for other reasons concerned with the environment in which users, that is translators, operate, and the nature, scope and purpose of the documents they are translating. Online interaction is clearly superior, other things, such as the quality of information offered, being equal, when: - Each document is worked on by only one translator, - A wide range of subjects is to be covered, - Translators are in several locations, including many working alone, perhaps freelance. Offline working, i.e. production of subject or document based lists at a central site for provision in printed form to translators, can give advantages when: - Documents are often worked on by several translators (all can be given specialised vocabularies to aid consistency), - The consequences of errors can be catastrophic (e.g. instruction manuals, safety rules) - with offline production of printed word lists it is possible to check and eliminate any inconsistencies or ambiguities which might not be apparent when looking at single entries online. However, it should be noted that even when offline working is the preferred mode, it can be advantageous to have online access, particularly for terminologists responsible for the creation and maintenance of the collection. When online access is provided there are different solutions that can be adopted. If language and specialised terminology were static and precise and used consistently there would, of course, be little problem. It would merely be necessary to enter the expression for which an equivalent was sought and the system could immediately give the correct answer. However, in practice this is often not the case; variant spellings can intrude and expressions are coined, modified and misused. It is therefore necessary for a system to offer several alternative solutions, none of which may, in the event, turn out to be helpful. Thus, a system may offer lists of alphabetically adjacent terms for consideration, or nearest matches, calculated using predetermined embedded algorithms, may be presented singly, as is the case with Eurodicautom. On multilingual term banks there are two ways in which the data can be arranged; which is preferred depends on the content of the bank, the way in which it is created, and the manner in which it is used. If there is one main language for the system, terminological records, and the software to handle them, can be quite different from what might be needed if all languages are equally important. Thus, in the first instance, all entries can refer directly to the main language, with the seeking of equivalents in two of the minor languages being carried out in two stages, using the main language as a 'switching' language. This is the approach adopted by Lexis.
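The two presentation styles just described - a window of alphabetically adjacent entries, or nearest matches offered one at a time - can be imitated in a few lines of Python. This is a sketch only: the term list is invented, and the matching uses the standard difflib module purely as a stand-in for the embedded algorithms a system such as Eurodicautom actually uses.

    import bisect
    import difflib

    # Illustrative sketch: a tiny in-memory term list standing in for a term bank.
    TERMS = sorted(["contingent liability", "current asset", "current liability",
                    "deferred taxation", "depreciation", "legal reserve"])

    def alphabetical_neighbours(query, width=2):
        """Offer the entries alphabetically adjacent to the query."""
        i = bisect.bisect_left(TERMS, query)
        return TERMS[max(0, i - width): i + width]

    def nearest_matches(query, n=3):
        """Offer the closest matches; difflib stands in for an embedded algorithm."""
        return difflib.get_close_matches(query, TERMS, n=n, cutoff=0.6)

    print(alphabetical_neighbours("current liabilities"))
    print(nearest_matches("curent liabilty"))

How the languages themselves are arranged within the records is a separate question, taken up below.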
Alternatively, for each concept represented, terms from several languages may be collected together in a single record. This can clearly introduce more uncertainty into the provision of equivalents than would arise if only two languages were linked. This is the solution adopted by Eurodicautom. As has been indicated throughout this section, there are many different techniques that can be adopted in the operation of term banks. While all the alternatives are not necessarily mutually exclusive, it is, nevertheless, unusual to find systems offering a wide range of possibilities. Partly this is because of positive decisions by those responsible for operating the term bank, but often the software in use allows little choice, once initial design decisions have been made. It is not easy to add additional facilities, even when a need is recognised, and sometimes it may be impossible to do so by modifying or extending existing software. One other constraint should be mentioned: in spite of some claims to the contrary, software is often linked to specific hardware, certainly if that software is to perform to its best ability. Thus the choice of software may well be limited for reasons outside the direct control of the actual operator of the term bank. For example, it comes as no surprise to learn that Team runs on Siemens hardware.
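The difference between the two arrangements can be caricatured as two record shapes. Again this is a sketch only, with invented data and field names; neither Lexis records nor Eurodicautom records actually look like this.

    # (a) Main-language ("switching") arrangement: every entry points to the main
    #     language, and a minor-to-minor equivalent is found in two stages.
    to_main = {("fr", "amortissement"): "depreciation"}     # minor -> main (en)
    from_main = {("depreciation", "es"): "amortizacion"}    # main (en) -> minor

    def via_switching_language(term, src, tgt):
        main_term = to_main[(src, term)]
        return from_main[(main_term, tgt)]

    # (b) Concept-based arrangement: one record collects, for a single concept,
    #     the terms of every language covered.
    concept_record = {
        "concept": "C0001",
        "terms": {"en": "depreciation", "fr": "amortissement", "es": "amortizacion"},
    }

    def via_concept_record(record, tgt):
        return record["terms"][tgt]

    print(via_switching_language("amortissement", "fr", "es"))
    print(via_concept_record(concept_record, "es"))

The first shape keeps each bilingual link precise but makes minor-to-minor lookups indirect; the second makes every language pair directly available at the price of the extra uncertainty noted above.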
future possibilities:
Before looking at possible future trends it is worth recalling some rather obvious and well-known points about software. Software is, of course, a critical part of any computer-based system, but it is important to remember that it is no more than a tool. Often software dictates the course that a service follows, whereas the opposite should be the case. The development in computer hardware has been remarkable over the past decade or so, but software has hardly kept pace. Many operating systems, for example, are firmly based on systems originally developed in the 1960s, and many so-called new developments, such as relational database systems, first proposed over ten years ago, have been many years in gestation.

One of the reasons for the slowness in the development of software is that no new techniques have been developed to aid software production, which remains a demanding, labour-intensive activity. Furthermore, as hardware performance has increased, and costs lowered, software has become more and more complex to take advantage of these improvements. It has, therefore, become increasingly difficult and expensive to create or modify existing software, and standards have had an even more significant role to play in allowing interworking between different software products.

The range of services needed, and particularly the ways in which those services are likely to be used, is certain to change considerably in the coming years. Much is made of information technology and the opportunities it offers for new ways of working, and translation is one profession that is better placed to take advantage of new technologies than many others; indeed many translators have been working, without the benefits of these new innovations, in the very manner which their introduction, we are told, makes so attractive. Developments that are clearly attractive include cheaper and more reliable telecommunications, perhaps using public data networks, the increased availability of word processors, and developments such as increased-capacity floppy discs, fixed Winchester discs, and optical discs. It can be foreseen that many translations will be created using word processors, perhaps even using split screens to display original and translation simultaneously, and the possibility of making online searches of term banks from the same terminal while making the translation is quite realistic. Even voice input and output can be envisaged in the longer term. Distribution of specialised vocabularies, perhaps on optical discs, is another possibility. While it is technically possible, today, to use the same machine for word processing and accessing an online computer system over a network, these activities must usually be carried out as two distinct and separate operations, so, in a sense, the manner in which a term bank can be interrogated online is immaterial. However, if the greatest benefit is to be achieved, integration of word processing and online searching for terminology is desirable, and if this is to take place, many implications for online system design have to be considered.

As mentioned in the introduction, the longer established term banks tend to use purpose-built software, partly because nothing generally available at the time was found to be suitable, and partly because each is aimed at providing a range of services not found elsewhere, using terminological records and searching methods which are more or less unique. 
Thus there are differences of opinion as to what should be offered to the translator, as well as what comprises a terminological record, and how the terminological records should be searched and presented to the user. All this means that exchange of data between systems is neither as easy nor as useful as it might be, and, consequently, restricts the amount of information that any one system can offer in a coherent manner. But the consequences of this diversity in approach go further. Leaving aside any commercial or other restrictions on access that might exist, the potential user of the wealth of terminological data that is already available from term banks is faced with the difficulty of learning how to use each system, and of comprehending the information that is supplied in terms of recognising its strengths and limitations.

What is needed, before suitable software can be developed or selected, is some agreement on the practice, as well as the principles and theory, of terminological control. The goals are well known, but the most effective methods are not. One of the most fundamental difficulties rests in the form of representation that is used for storing and searching the data. What is stored is, of course, an orthographic representation of the term, whereas what is really sought is the concept represented by the given characters in the source text, and a means of representing that concept in the target language. Thus, differences in spelling, for example, or other changes that occur regularly in language, cause difficulties. Some techniques which may help to overcome some of these difficulties have been developed for other reasons, and it may be that some of these - orthographic and phonemic approximation techniques, for example - could be usefully applied.

In the absence of any such agreement, and possibly even afterwards, it is desirable that all systems should attempt to maintain the greatest flexibility in their approach. However, this is difficult to achieve where specially created software is concerned; there is an inevitable tendency to provide what is definitely required at the time of program specification, perhaps giving little thought to what services might be required, or facilities demanded, at some indeterminate time in the future.

It can be shown that many of the features needed for term bank operation can be perfectly satisfactorily provided using proprietary information retrieval packages; indeed, some operational systems do just this. There are, of course, penalties, as file structures provided for in the original design of the retrieval system must be employed, as must the retrieval facilities. Whether such an approach produces anything that is worse than could be provided otherwise is extremely doubtful; in fact the result is likely to be superior in the long term, as proprietary software from a reputable supplier is generally far more hospitable to future changes and developments than are purpose-built program suites. All that can be said with certainty is that the resulting service may well look different! 
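By way of illustration of the orthographic approximation techniques mentioned above, the sketch below ranks stored terms by character-trigram overlap with a query, so that minor spelling variants still find the right entry. It is an invented example, not the matching method of any of the systems discussed in this paper.

```python
# Minimal orthographic-approximation sketch: rank stored terms by
# character-trigram overlap (Jaccard coefficient) with the query,
# so spelling variants and typing slips still retrieve an entry.

def trigrams(text):
    padded = " " + text.lower() + " "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

def nearest(query, stored_terms, n=3):
    return sorted(stored_terms,
                  key=lambda term: similarity(query, term),
                  reverse=True)[:n]

stored = ["circuit breaker", "contact breaker", "circuit diagram", "breakwater"]
print(nearest("circuit braker", stored))  # "circuit breaker" should rank first
```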
Software products used for term banks should conform, as far as possible, to agreed standards and conventions, so that interworking with other systems, be it for exchange of data, or to allow word processors to search the data bank, becomes easier, and so that changes to new, improved software can be made without disrupting the smooth running of the system, or inconveniencing users.

For the would-be operator of a term bank, many of the factors that must be considered, apart from whether the software will actually do the job, are just the same as for any other software selection exercise. They include the documentation supplied with the program, the support that will be given by the supplier to ensure that it will continue to operate satisfactorily with new releases of an operating system, for example, and the ease with which the service can be switched to different hardware or software, should the need arise. All of these make selection of a general-purpose package more attractive than the use of a purpose-built system, which may perform the task perfectly adequately, yet is unsupported in any real sense.
conclusion:
It will be a long time before good, widely acceptable software becomes available, as there is no consensus as to what should be provided, nor is there yet any substantial market for such a product.

If there is to be any progress, the first requirement is for users and creators/promoters of term banks to get together, rather more than they appear to have done in response to INFOTERM initiatives, and decide what it is they are trying to do, and how to do it. It will be necessary to go further than the INFOTERM proposals themselves have gone (1, 2). Otherwise, what incentive is there for anyone to spend valuable time and effort providing a solution that may not be used in any event? - software is extremely expensive to produce and maintain!

In the meantime, the most satisfactory solutions may well be achieved using standard general-purpose information retrieval packages, which, while having limitations, are generally able to provide a flexible solution and respond to changing demands. More importantly, by allowing the system operator to divorce data acquisition from the rigid requirements of a software system, they provide for easier changes to future, perhaps unforeseen, solutions in the years to come.
introduction:
In reviewing software for term banks, a number of different viewpoints could be adopted: that of a system designer, responsible for producing and maintaining software, that of a service provider, or that of a user of the system, for example. Each of these groups will be more interested in different aspects, and in a short paper such as this it is not possible to go into great detail when discussing the software that is, or might be, used with operational term banks. What is of most interest to the user of a term bank is the range of facilities that the software provides, and even this is of secondary importance when compared with the content of the term bank. Nevertheless, software is an extremely important component of a system, dictating not only how information may be extracted, but even how information may be represented.

Which software is applicable in any given situation depends on many factors, and it is not possible to give generally applicable answers. In an attempt to indicate some of the factors that should be considered, this paper ignores the detailed operation of software for term banks - file structures, hardware, operating systems and the like - and concentrates on what the system offers to the user. Existing software is discussed in terms of the capabilities and limitations of present-day systems - factors which can depend on operational decisions as much as on any limitation in software - and possible future directions are discussed against a framework of increasing integration of services and a growing recognition of the opportunities offered by information technology.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0 | null | null | null | null | null | null | null | null |
34c706d63907d5a2e6b8ff5b1031430f3953cad8 | 237295781 | null | Session 5: Creating Term Banks. Chairman{'}s remarks | An early but stimulating start to the second day of the conference sketched in some background to the creation of term banks: whether software should be off-the-peg or made-to-measure (Alan Negus); hardware in the translation services of the future (John Brook); and, not least, finding the money for your term bank's computer (John Alvey). | {
"name": [
"Lawson, Veronica"
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 0 | 0 | null | null | null | null | null | Translation Consultant, London, United Kingdom An early but stimulating start to the second day of the conference sketched in some background to the creation of term banks: whether software should be off-the-peg or made-to-measure (Alan Negus); hardware in the translation services of the future (John Brook); and, not least, finding the money for your term bank's computer (John Alvey).Hardware, as Negus said, had come a long way since the early days of term banks, when terminals were few and needed frequent application of the soldering iron. Software, however, had not kept pace and was now more complex and hence harder and dearer to produce or adapt. We were still at the very beginning of a learning curve in the application to term banks, and future problems, yet unforeseen, could be solved more easily if term banks used proprietary information retrieval packages instead of purpose-built software.A good example of the progress in hardware in recent years was the Xerox 8000 family of electronic office products described by Brook. Its professional workstation replaced much of the office furniture and fittings with an "electronic desktop", and also offered electronic mail in-house and outside. It coped with graphics and most European languages, including Greek, as well as Japanese, mathematical symbols, scientific formulae and various special characters -facilities which were useful not only in word processing, but in online searching. The computer capacity might not, however, be sufficient for a term bank. Incompatibility with other systems, while still a problem (particularly in big organisations in which different departments ordered different equipment), was slowly diminishing.This system had won a prize for user-friendliness from the computer press the week before, largely because of the "mouse" used to point its cursor. (Other Xerox research had produced a product which it was suggested might be called the Worm, to eat up the Apple personal computer market!) As Alvey said, new developments in software and microcomputers should soon make it easy even for small users to have their own term bank and word processing system. Indeed, a couple of weeks after the conference, VisiCorp announced a much cheaper, micro-based electronic desktop complete with "mouse".The translating profession, as Negus reminded us, was better placed than most to take advantage of the new technology. For maximum benefit, however, online searching for terminology must be integrated with word processing. According to Alvey, indeed, word processing could be the key to a term bank. A translation department was normally regarded by management as a basic service and must therefore work within a tight budget. It was usually, however, an obvious candidate for word processing, and if a combined word processor and minicomputer could be proved economic, the department would have a "free" computer on which to store its terminology. It is a pleasing prospect, when so many translation services, even large ones, are "barely beyond the artisan stage" in terminology. | Main paper:
veronica lawson:
Translation Consultant, London, United Kingdom An early but stimulating start to the second day of the conference sketched in some background to the creation of term banks: whether software should be off-the-peg or made-to-measure (Alan Negus); hardware in the translation services of the future (John Brook); and, not least, finding the money for your term bank's computer (John Alvey).Hardware, as Negus said, had come a long way since the early days of term banks, when terminals were few and needed frequent application of the soldering iron. Software, however, had not kept pace and was now more complex and hence harder and dearer to produce or adapt. We were still at the very beginning of a learning curve in the application to term banks, and future problems, yet unforeseen, could be solved more easily if term banks used proprietary information retrieval packages instead of purpose-built software.A good example of the progress in hardware in recent years was the Xerox 8000 family of electronic office products described by Brook. Its professional workstation replaced much of the office furniture and fittings with an "electronic desktop", and also offered electronic mail in-house and outside. It coped with graphics and most European languages, including Greek, as well as Japanese, mathematical symbols, scientific formulae and various special characters -facilities which were useful not only in word processing, but in online searching. The computer capacity might not, however, be sufficient for a term bank. Incompatibility with other systems, while still a problem (particularly in big organisations in which different departments ordered different equipment), was slowly diminishing.This system had won a prize for user-friendliness from the computer press the week before, largely because of the "mouse" used to point its cursor. (Other Xerox research had produced a product which it was suggested might be called the Worm, to eat up the Apple personal computer market!) As Alvey said, new developments in software and microcomputers should soon make it easy even for small users to have their own term bank and word processing system. Indeed, a couple of weeks after the conference, VisiCorp announced a much cheaper, micro-based electronic desktop complete with "mouse".The translating profession, as Negus reminded us, was better placed than most to take advantage of the new technology. For maximum benefit, however, online searching for terminology must be integrated with word processing. According to Alvey, indeed, word processing could be the key to a term bank. A translation department was normally regarded by management as a basic service and must therefore work within a tight budget. It was usually, however, an obvious candidate for word processing, and if a combined word processor and minicomputer could be proved economic, the department would have a "free" computer on which to store its terminology. It is a pleasing prospect, when so many translation services, even large ones, are "barely beyond the artisan stage" in terminology.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0 | null | null | null | null | null | null | null | null |
d0556c05ed9dbca67080f9c3f829ec4eb02b3efa | 237295776 | null | Session 7: Terminology on the Market. Summary of discussion | The speaker was asked if the tables in the Glossary of European Accounting Charts could be used for other countries using the same languages, e.g. French to African francophone countries? And further if it was planned to extend the glossary to these countries. | {
"name": [
"Glover, Wendy"
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 0 | 0 | null | null | null | null | The speaker was asked if the tables in the Glossary of European Accounting Charts could be used for other countries using the same languages, e.g. French to African francophone countries? And further if it was planned to extend the glossary to these countries.We were told that since accountancy terms and requirements vary from country to country the glossary only applies in the named countries although it could be used with caution to obtain a rough idea of the terms for other areas. But English accounting terms could not, for example, be applied in the USA. Other countries may be covered in subsequent volumes but any such project takes 2 years. Volume 1 will of course, be updated.Questions were asked regarding LEXIS, the legal information retrieval system (not to be confused with LEXIS the German data bank, the subject of a subsequent paper -Editor). Delegates were told that the system covers US law as well as English, Welsh, French law etc. The system software for this vast data store of legal cases was not for sale. Users hire a LEXIS terminal linked by private cable to the data store. The cost of access if £1.20 per minute which compares favourably with lawyers' fees. LEXIS trains the users and maintains its terminals. The system is able to output complete major collections of legal cases in as many minutes as lawyers take months. | null | Main paper:
:
The speaker was asked if the tables in the Glossary of European Accounting Charts could be used for other countries using the same languages, e.g. French for African francophone countries, and further if it was planned to extend the glossary to these countries.

We were told that since accountancy terms and requirements vary from country to country the glossary only applies in the named countries, although it could be used with caution to obtain a rough idea of the terms for other areas. But English accounting terms could not, for example, be applied in the USA. Other countries may be covered in subsequent volumes, but any such project takes 2 years. Volume 1 will, of course, be updated.

Questions were asked regarding LEXIS, the legal information retrieval system (not to be confused with LEXIS the German data bank, the subject of a subsequent paper - Editor). Delegates were told that the system covers US law as well as English, Welsh, French law etc. The system software for this vast data store of legal cases was not for sale. Users hire a LEXIS terminal linked by private cable to the data store. The cost of access is £1.20 per minute, which compares favourably with lawyers' fees. LEXIS trains the users and maintains its terminals. The system is able to output complete major collections of legal cases in as many minutes as lawyers take months.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0 | null | null | null | null | null | null | null | null |
17b4b4af0d13fcc4b3cdd4c59dd672cb89c6efc9 | 51845689 | null | The {BSI} {ROOT} Thesaurus: does it serve translators? | Technical thesauri can be of great use to translators although there are pitfalls. Some different types of thesaurus are described briefly, and the value of ROOT to terminologists is outlined. To my horror I find myself addressing an audience of terminologists on a subject whose own terminology is, er, woolly? loose? vague? ambiguous? misty? nebulous? perplexed? mysterious? mystic? mystical?... or is it hidden? recondite? abstruse? or even transcendental?. No prizes for guessing whose thesaurus I consulted to find that collection of terms. If I am not mistaken it was Roget who coined the term "thesaurus" to describe his treasure-store of terminology. He could not have foreseen how his own term would be borrowed, adapted, or even perverted, to end up being used for several different concepts which are close enough to have something in common, but distinct enough to cause endless confusion in conversations where the term is not defined. My own remarks will be limited in the main to technical thesauri, but still I'd better start by indicating some meanings of "thesaurus". | {
"name": [
"Dextre, S. G."
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 2 | 0 | null | To my horror I find myself addressing an audience of terminologists on a subject whose own terminology is, er, woolly? loose? vague? ambiguous? misty? nebulous? perplexed? mysterious? mystic? mystical?... or is it hidden? recondite? abstruse? or even transcendental?. No prizes for guessing whose thesaurus I consulted to find that collection of terms. If I am not mistaken it was Roget who coined the term "thesaurus" to describe his treasure-store of terminology. He could not have foreseen how his own term would be borrowed, adapted, or even perverted, to end up being used for several different concepts which are close enough to have something in common, but distinct enough to cause endless confusion in conversations where the term is not defined. My own remarks will be limited in the main to technical thesauri, but still I'd better start by indicating some meanings of "thesaurus". Table 1 : Some uses of the term "thesaurus" a) Simple term list for a particular information retrieval system. (Shown only those terms which are "allowed" for indexing and searching). b) Elaborate term list for a particular information retrieval system. (Shows "allowed" terms plus instructions for dealing with "non-preferred" terms plus guidance as to relationships between terms.) c) Elaborate term list for general application. (Structured as in b) but intended to give inspiration to people searching unfamiliar bibliographic databases.) d) Machine-held list of "allowed" character strings for database validation. Table 1 shows only some of the meanings. While both a) and d) are fairly straightforward lists, b) and hence c) can be infinitely variable in format. Some list the terms in alphabetical order, some in subject order, some in both, some even list the terms in three or four orders including permuted indexes, hierarchical listings and so on. Those in subject order can reassemble straightforward classification schemes, can involve elaborate symbology to show relationships between terms, or can lay terms out on charts called arrowgraphs or association maps. (For examples of different layouts, see figures 1, 3 and 4). Now in principle, a), b), c) and d) are quite distinct from each other. In real life, however, any technical thesaurus you happen to pick up is likely to have its own unique mix of the features in any of the four categories.The BSI ROOT Thesaurus, in being quite exceptional, is no exception to that rule. It was designed, not exactly for one particular database, but for any bibliographic database holding standards or technical regulations. This effectively made its subject area so broad that it could also be used for other people's databases covering a wide range of technologies. So it was never a type a) thesaurus; it started off as a b), grew to be a c) and now that we are putting it up on computer it will become a d) as well."So what?" you may say. Looking through the journal Terminologie recently, I chanced on the following paragraphs in an article by Prof. Helmut Felber (3):The meaning (concept) of a term is dependent on the system of concepts. The term keeps the particular meaning also within the subject-context, i.e. 
the meaning it has in the system of concepts.

The thesaurus word is a word - mostly a term - or a name, which is used as a descriptor or non-descriptor for information retrieval.

A descriptor is a thesaurus word, which is prescribed for use in the information system. For this purpose, a term or name is selected from the existing synonyms or quasi-synonyms. The meaning of this term is thus fixed for this information system, and may deviate from its general usage within a technical language. For this reason, a thesaurus cannot be used for technical translations.

As I had just been given the job of addressing you translators on the BSI ROOT Thesaurus, I must confess a crease or two appeared on my forehead.

One's first reaction is of course that Prof. Felber is absolutely right. A thesaurus designed for a specific in-house application can and should manipulate terms in unconventional ways, where this assists with effective retrieval of information. For example, one of the functions of such a thesaurus is to make sure that all users (indexers and searchers) use the same term for the same concepts. Thus there are instructions such as: Notice that while the first two examples show undeniable synonyms, the other three are different. Not all radiators are heaters, and not all heaters are radiators. In ROOT, for example, these two could emphatically not be considered synonyms because we need to distinguish the concepts. But conceivably the thesaurus of a vegetable-growers association could usefully lump all sorts of heaters together; after all, they do not have to cope with literature about car radiators. Similarly, many thesauri could justifiably use the term "Sports equipment" to cover everything from water wings to a billiard cue.

In general, in-house thesauri [types a) and b)], in the name of effectiveness and efficiency, aim to cut down the number of "allowed" terms or descriptors, by controlling true synonyms and quasi-synonyms and by collecting together under one descriptor any narrower terms considered too specific for inclusion.

But for type c) applications the matter changes. True synonyms must still be controlled, but more caution is needed with quasi-synonyms. Now that online bibliographic databases are springing up all over the place there is an increasing demand for thesauri which will help searchers to think of other words for the concept they have in mind. Some databases have their own thesauri (used by their indexers and available to searchers); others have to be accessed by free-language terms, and it can be very difficult trying to think of the words someone else might have used to express the solution to a problem you had in mind. Hence the demand for the so-called "search thesaurus".

The "search thesaurus" does not have to invoke quasi-synonyms or subsume specific terms under a broader heading. It is more like Roget's thesaurus in giving inspiration as to alternative terminology. As compared with a dictionary, for translation purposes it has the disadvantage of not showing a variety of definitions for a single term, but if it has a subject (or hierarchical) section then it has the advantage of showing whole arrays of related terms on one page.

To sum up, when using a thesaurus the translator must be cautious, particularly about quasi-synonyms. But, with respect to Prof. Felber, caution should not prevent his taking advantage of the many excellent technical thesauri available today. (See also reference 4).

Finally I must return to ROOT. 
ROOT does invoke quasi-synonyms from time to time (for example, see Fire alarms = Fire sirens = Smoke alarms in Figure 1 ), but avoids gross distortion of terminology. Whenever possible it follows definitions contained in British Standards. The Subject display (see Figure 1 , showing a small part of the schedule for Safety measures) goes to great pains to lay terms out in a way which | null | null | null | null | Main paper:
:
To my horror I find myself addressing an audience of terminologists on a subject whose own terminology is, er, woolly? loose? vague? ambiguous? misty? nebulous? perplexed? mysterious? mystic? mystical?... or is it hidden? recondite? abstruse? or even transcendental?. No prizes for guessing whose thesaurus I consulted to find that collection of terms. If I am not mistaken it was Roget who coined the term "thesaurus" to describe his treasure-store of terminology. He could not have foreseen how his own term would be borrowed, adapted, or even perverted, to end up being used for several different concepts which are close enough to have something in common, but distinct enough to cause endless confusion in conversations where the term is not defined. My own remarks will be limited in the main to technical thesauri, but still I'd better start by indicating some meanings of "thesaurus". Table 1 : Some uses of the term "thesaurus" a) Simple term list for a particular information retrieval system. (Shown only those terms which are "allowed" for indexing and searching). b) Elaborate term list for a particular information retrieval system. (Shows "allowed" terms plus instructions for dealing with "non-preferred" terms plus guidance as to relationships between terms.) c) Elaborate term list for general application. (Structured as in b) but intended to give inspiration to people searching unfamiliar bibliographic databases.) d) Machine-held list of "allowed" character strings for database validation. Table 1 shows only some of the meanings. While both a) and d) are fairly straightforward lists, b) and hence c) can be infinitely variable in format. Some list the terms in alphabetical order, some in subject order, some in both, some even list the terms in three or four orders including permuted indexes, hierarchical listings and so on. Those in subject order can reassemble straightforward classification schemes, can involve elaborate symbology to show relationships between terms, or can lay terms out on charts called arrowgraphs or association maps. (For examples of different layouts, see figures 1, 3 and 4). Now in principle, a), b), c) and d) are quite distinct from each other. In real life, however, any technical thesaurus you happen to pick up is likely to have its own unique mix of the features in any of the four categories.The BSI ROOT Thesaurus, in being quite exceptional, is no exception to that rule. It was designed, not exactly for one particular database, but for any bibliographic database holding standards or technical regulations. This effectively made its subject area so broad that it could also be used for other people's databases covering a wide range of technologies. So it was never a type a) thesaurus; it started off as a b), grew to be a c) and now that we are putting it up on computer it will become a d) as well."So what?" you may say. Looking through the journal Terminologie recently, I chanced on the following paragraphs in an article by Prof. Helmut Felber (3):The meaning (concept) of a term is dependent on the system of concepts. The term keeps the particular meaning also within the subject-context, i.e. the meaning it has in the system of concepts.The thesaurus word is a word -mostly a term -or a name, which is used as a descriptor or non-descriptor for information retrieval.A descriptor is a thesaurus word, which is prescribed for use in the information system. For this purpose, a term or name is selected from the existing synonyms or quasi-synonyms. 
The meaning of this term is thus fixed for this information system, and may deviate from its general usage within a technical language. For this reason, a thesaurus cannot be used for technical translations.As I had just been given the job of addressing you translators on the BSI ROOT Thesaurus, I must confess a crease or two appeared on my forehead.One's first reaction is of course that Prof. Felber is absolutely right. A thesaurus designed for a specific in-house application can and should manipulate terms in unconventional ways, where this assists with effective retrieval of information.For example, one of the functions of such a thesaurus is to make sure that all users (indexers and searchers) use the same term for the same concepts. Thus there are instructions such as: Notice that while the first two examples show undeniable synonyms, the other three are different. Not all radiators are heaters, and not all heaters are radiators. In ROOT, for example, these two could emphatically not be considered synonyms because we need to distinguish the concepts. But conceivably the thesaurus of a vegetable-growers association could usefully lump all sorts of heaters together; after all, they so not have to cope with literature about car radiators. Similarly, many thesauri could justifiably use the term "Sports equipment" to cover everything from water wings to a billiard cue.In general, in-house thesauri [types a) and b)], in the name of effectiveness and efficiency, aim to cut down the number of "allowed" terms of descriptors, by controlling true synonyms and quasi-synonyms and by collecting together under one descriptor any narrower terms considered two specific for inclusion.But for type c) applications the matter changes. True synonyms must still be controlled, but more caution is needed with quasi-synonyms. Now that online bibliographic databases are springing up all over the place there is an increasing demand for thesauri which will help searchers to think of other words for the concept they have in mind. Some databases have their own thesauri (used by their indexers and available to searchers); others have to be accessed by free language terms, and it can be very difficult trying to think of the words someone else might have used to express the solution to a problem you had in mind. Hence the demand for the so-called "search thesaurus".The "search thesaurus" does not have to invoke quasi-synonyms or subsume specific terms under a broader heading. It is more like Roget's thesaurus in giving inspiration as to alternative terminology. As compared with a dictionary, for translation purposes it has the disadvantage of not showing a variety of definitions for a single term, but if it has a subject (or hierarchical) section then it has the advantage of showing whole arrays of related terms on one page.To sum up, when using a thesaurus the translator must be cautious, particularly about quasi-synonyms. But, with respect to Prof. Felber, caution should not prevent his taking advantage of the many excellent technical thesauri available today. (See also reference 4).Finally I must return to ROOT. ROOT does invoke quasi-synonyms from time to time (for example, see Fire alarms = Fire sirens = Smoke alarms in Figure 1 ), but avoids gross distortion of terminology. Whenever possible it follows definitions contained in British Standards. The Subject display (see Figure 1 , showing a small part of the schedule for Safety measures) goes to great pains to lay terms out in a way which
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0 | null | null | null | null | null | null | null | null |
78812aa3facea5ef8c2c708be2f35c702278c8af | 237295784 | null | Session 8: Term Banks Today and Tomorrow. Chairman{'}s introduction | As some of you will know, the British Library Research and Development Department is the funding body for library and information research in the UK, currently with an annual research budget of around £1.4m. We inherited many of our functions from our predecessor the Office for Scientific and Technical Information (OSTI), but acquired some new ones with our change of name and our move to the British Library in 1974. The Department has never seen itself as a major contributor to research and development in the terminology field, but our programme now covers scientific and technical information and also information in the social sciences and the humanities. We support research on the information needs of the professional specialist, whilst not neglecting the information requirements of the ordinary citizen. The latter are covered in our programme of community information and public library research. | {
"name": [
"Baxter, Paul"
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 0 | 0 | null | null | null | null | null | The wide range of projects supported by OSTI and the British Library R & D Department over the past 17 years has included experimentation with new information services, assessments of the potential of new technologies, investigations of primary journal publishing, and developments in indexing, classification and cataloguing. The fields of language and terminology have always been somewhat peripheral aspects of our programme, although indexing and thesaurus construction overlap with these fields. Terminology is not an area in which librarians and information specialists in the UK have shown a great interest. Perhaps they regard the language in which concepts are expressed as something outside their responsibility, much as they regard the content of books and documents as a subject over which they would not wish to exercise any control.Although terminological research may be of only limited interest to the library community, term banks could offer considerable benefits to librarians and information specialists, amongst many other types of users. Indeed one of the obstacles to the establishment of a UK bank may be the sheer diversity of its possible uses, an aspect which other speakers have touched on. In the public sector it is not clear where responsibility for its creation should lie. The high capital cost of setting it up, and the likelihood that a fair amount of subsidy would be required, at least in its early years, have acted as additional disincentives, both to the public and private sectors.There is also a "chicken and egg" problem in that until users have access to such a service, they do not appreciate fully the extent of its usefulness.In 1980 the Department made a contribution to breaking this circle by supporting a one year feasibility study by Professor Sager's group at the University of Manchester Institute of Science and Technology. We also supported a study visit overseas enabling Professor Sager's research worker, John McNaught, to visit a number of European term banks and report on their organisation and services. Most of you will probably have seen the extensive documentation resulting from these studies 1The Department's involvement did not end with providing the money for this work (which, incidentally, amounted to around £11,000). We try to ensure that the results of projects are effectively disseminated not only through reports but also through journal articles, seminars and workshops. Our main effort in this case was directed into the organisation of a small meeting in December 1981 bringing together interested people both from Government departments and from the private sector. At the meeting, Professor Sager presented a paper outlining two approaches to establishing a term bank in this country. The first approach was the "top down" route, involving joint funding, development and exploitation by a combination of public and private sector bodies. The other was a more cautious approach whereby existing terminology collections would be coordinated across a variety of organisations. Since then I understand there has been some progress on the second approach but rather less on the first.British Library R & D Department can do little more to help either approach. Our funds are for research only and cannot be used to support the building of a database. 
Having made our initial contribution, we hope that other organisations will now come forward with funds for development work. Of course further fundamental research may also be required and it is likely that the Department will be seen as a possible source of funds for this type of work in view of our past contributions. In fairness, however, it should be pointed out that the Department's funds are at present heavily committed to research more directly linked to the operational problems of library and information services. Naturally research of this kind must be our main concern and this has been underlined in the priorities set for the Department by its Advisory Committee.I do not want to end this introduction on an entirely pessimistic note so I will conclude by expressing the hope that any conclusions and recommendations of this conference are widely disseminated to policy makers and funding agencies, including the British Library! The high attendance, and the enthusiasm apparent in the discussions, can only bode well for the future. | Main paper:
:
The wide range of projects supported by OSTI and the British Library R & D Department over the past 17 years has included experimentation with new information services, assessments of the potential of new technologies, investigations of primary journal publishing, and developments in indexing, classification and cataloguing. The fields of language and terminology have always been somewhat peripheral aspects of our programme, although indexing and thesaurus construction overlap with these fields. Terminology is not an area in which librarians and information specialists in the UK have shown a great interest. Perhaps they regard the language in which concepts are expressed as something outside their responsibility, much as they regard the content of books and documents as a subject over which they would not wish to exercise any control.Although terminological research may be of only limited interest to the library community, term banks could offer considerable benefits to librarians and information specialists, amongst many other types of users. Indeed one of the obstacles to the establishment of a UK bank may be the sheer diversity of its possible uses, an aspect which other speakers have touched on. In the public sector it is not clear where responsibility for its creation should lie. The high capital cost of setting it up, and the likelihood that a fair amount of subsidy would be required, at least in its early years, have acted as additional disincentives, both to the public and private sectors.There is also a "chicken and egg" problem in that until users have access to such a service, they do not appreciate fully the extent of its usefulness.In 1980 the Department made a contribution to breaking this circle by supporting a one year feasibility study by Professor Sager's group at the University of Manchester Institute of Science and Technology. We also supported a study visit overseas enabling Professor Sager's research worker, John McNaught, to visit a number of European term banks and report on their organisation and services. Most of you will probably have seen the extensive documentation resulting from these studies 1The Department's involvement did not end with providing the money for this work (which, incidentally, amounted to around £11,000). We try to ensure that the results of projects are effectively disseminated not only through reports but also through journal articles, seminars and workshops. Our main effort in this case was directed into the organisation of a small meeting in December 1981 bringing together interested people both from Government departments and from the private sector. At the meeting, Professor Sager presented a paper outlining two approaches to establishing a term bank in this country. The first approach was the "top down" route, involving joint funding, development and exploitation by a combination of public and private sector bodies. The other was a more cautious approach whereby existing terminology collections would be coordinated across a variety of organisations. Since then I understand there has been some progress on the second approach but rather less on the first.British Library R & D Department can do little more to help either approach. Our funds are for research only and cannot be used to support the building of a database. Having made our initial contribution, we hope that other organisations will now come forward with funds for development work. 
Of course further fundamental research may also be required and it is likely that the Department will be seen as a possible source of funds for this type of work in view of our past contributions. In fairness, however, it should be pointed out that the Department's funds are at present heavily committed to research more directly linked to the operational problems of library and information services. Naturally research of this kind must be our main concern and this has been underlined in the priorities set for the Department by its Advisory Committee.I do not want to end this introduction on an entirely pessimistic note so I will conclude by expressing the hope that any conclusions and recommendations of this conference are widely disseminated to policy makers and funding agencies, including the British Library! The high attendance, and the enthusiasm apparent in the discussions, can only bode well for the future.
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0 | null | null | null | null | null | null | null | null |
c689beae95b54892f1bb54a441684451387c24b6 | 237295805 | null | Session 8: Term Banks Today and Tomorrow. Summary of discussion | The following points were made during the discussion: A speaker was asked if Eurodicautom had an interactive facility for recording enquiries and entering them in the bank. The reply was 'not yet', but it is intended to introduce one. Delegates were told that the Bundessprachenamt term bank has a few private users -two firms and some ministries. These do not have their own terminals, but input their own records and receive printouts. In reply to a question we heard that none of the speakers had studied the cost of creating a terminology bank. | {
"name": [
"Gilbert, Valerie"
],
"affiliation": [
null
]
} | null | null | Proceedings of Translating and the Computer: Term banks for tomorrow{'}s world | 1982-11-01 | 0 | 0 | null | null | null | null | null | null | Main paper:
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 507 | 0 | null | null | null | null | null | null | null | null |
0f0090438c1257443e0e2f521010443ea9f13f4b | 14038010 | null | {TEAM}: A Transportable Natural-Language Interface System | A major benefit of using natural language to access the information in a database is that it shifts onto the system the burden of mediating between two views of the data: the way in which the data is stored (the "database view"), and the way in which an end-user thinks about it (the "user's view"). Database information is recorded in terms of files, records, and fields, while natural-language expressions refer to the same information in terms of entities and relationships in the world. A major problem in constructing a natural-language interface is determining how to encode and use the information needed to bridge these two views. Current natural-language interface systems require extensive efforts by specialists in natural-language processing to provide them with the information they need to do the bridging. The systems are, in effect, hand-tailored to provide access to particular databases. | {
"name": [
"Grosz, Barbara J."
],
"affiliation": [
null
]
} | null | null | First Conference on Applied Natural Language Processing | 1983-02-01 | 10 | 109 | null | A major benefit of using natural language to access the information in a database is that it shifts onto the system the burden of mediating between two views of the data: the way in which the data is stored (the "database view"), and the way in which an end-user thinks about it (the "user's view"). Database information is recorded in terms of files, records, and fields, while natural-language expressions refer to the same information in terms of entities and relationships in the world. A major problem in constructing a natural-language interface is determining how to encode and use the information needed to bridge these two views. Current natural-language interface systems require extensive efforts by specialists in natural-language processing to provide them with the information they need to do the bridging. The systems are, in effect, hand-tailored to provide access to particular databases.

This paper focuses on the problem of constructing transportable natural-language interfaces, i.e., systems that can be adapted to provide access to databases for which they were not specifically hand-tailored. It describes an initial version of a transportable system, called TEAM (for Transportable English Access Data manager). The hypothesis underlying the research described in this paper is that the information required for the adaptation can be obtained through an interactive dialogue with database management personnel who are not familiar with natural-language processing techniques.

Issues of Transportability

The insistence on transportability distinguishes TEAM from previous systems such as LADDER [Hendrix et al., 1978], LUNAR [Woods, Kaplan, and Webber, 1972], PLANES [Waltz, 1975], and REL [Thompson, 1975], and has affected the design of the natural-language processing system in several ways. The decision to provide transportability to existing conventional databases (which distinguishes TEAM from CHAT [Warren, 1981]) means that the database cannot be restructured to make the way in which it stores data more compatible with the way in which a user may ask about the data. Although many problems can be avoided if one is allowed to design the database as well as the natural-language system, given the prevalence of existing conventional databases, approaches which make this assumption are likely to have limited applicability in the near term.

The TEAM system has three major components: (1) an acquisition component, (2) the DIALOGIC language system [Grosz, et al., 1982], and (3) a data-access component. Section C describes how the language and data-access components were designed to accommodate the needs of transportability. Section D describes the design of the acquisition component to allow flexible interaction with a database expert and discusses acquisition problems caused by the differences between the database view and user view. Section E shows how end-user queries are interpreted after an acquisition has been completed. Section F describes the current state of development of TEAM and lists several problems currently under investigation.

In TEAM, the translation of an English query into a database query takes place in two steps. First, the DIALOGIC system constructs a representation of the literal meaning or "logical form" of the query [Moore, 1981]. 
Second, the data-access component translates the logical form into a formal database query. Each of these steps requires a combination of some information that is dependent on the domain or the database with some information that is not. To provide for transportability, the TEAM system carefully separates these two kinds of information.

To adapt TEAM to a new database three kinds of information must be acquired: information about words, about concepts, and about the structure of the database. The data structures that encode this information--and the language processing and data-access procedures that use them--are designed to allow for acquiring new information automatically.

Information about words, lexical information, includes the syntactic properties of the words that will be used in querying the database and semantic information about the kind of concept to which a particular word refers. TEAM records the lexical information specific to a given domain in a lexicon. Information about the concepts in the domain and the relationships among them is recorded in a conceptual schema. A database schema encodes information about how concepts in the conceptual schema map onto the structures of a particular database. In particular, it links conceptual-schema representations of entities and relationships in the domain to their realization in a particular database.

TEAM currently assumes a relational database with a number of files. (No language-processing-related problems are entailed in moving TEAM to other database models.) Each file is about some kind of object (e.g., employees, students, ships, processor chips); the fields of the file record properties of the object (e.g., department, age, length).
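A rough sketch of how these three bodies of domain-specific information might be kept apart is given below, using the processor-chip example that appears later in the paper. The record layouts and names are invented for illustration; they are not TEAM's actual data structures.

```python
# Illustrative only: the three kinds of domain-specific knowledge
# acquired for a new database, kept separate from the general grammar
# and data-access rules.  Layouts and names are invented.

lexicon = {
    # word         syntactic and semantic information
    "chip":      {"cat": "noun", "sort": "PROCESSOR"},
    "processor": {"cat": "noun", "sort": "PROCESSOR"},
    "expensive": {"cat": "adjective", "predicate": "PRICE-OF", "degree": "+"},
}

conceptual_schema = {
    # domain concepts (sorts) arranged in a taxonomy, plus predicates
    "sorts": {
        "PROCESSOR": {"parent": "PHYSICAL-OBJECT"},
        "WORTH": {"parent": "MEASURE"},
    },
    "predicates": {
        "PRICE-OF": {"args": ("PROCESSOR", "WORTH")},
    },
}

database_schema = {
    # how the concepts are realised in one particular relational database
    "PROCESSOR": {"file": "CHIP", "key_field": "ID"},
    "PRICE-OF": {"file": "CHIP", "field": "PRICE", "unit": "dollars"},
}
```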
The DBE provides information about the files and fields in the database through a system-dlrected acquisition dialogue.As a result of this dlaloEue, the language-processlng and data-access components are extended so that the end-user may query the new database in natural-language.Because the DBE is assumed to be familiar with database structures, but not with language-processlng techniques, the acquisition dialogue is oriented around database structures. That is, the questions are about the kinds of things in the files and fields of the database, rather than about lexlcal entries, sort hierarchies, and predicates.The disparity between the database view of the data and the end-user's view make the acquisition process nontrlvlal. For instance, consider a database of information about students in a university.From the perspective of an enduser "sophomore" refers to a subset of all of the students, those who are in their second year at the university.The fact that a particular student is a sophomore might be recorded in the database in a number of ways, including: (l) in a separate file containing information about the sophomore students;(2) by a special value in a symbolic field (e.g., a CLASS field [n which the value SOPH indicates "sophomore"); (3) by a "true" value in a Boolean field (e.g., a * in an [S-$O?H field).natural-language querying to be useful, the end-user must be protected from having to know which type of representation was chosen. The questions posed to the DBE for each kind of database construct must be sufficient to allow DIALOGIC to handle approximately the same range of linguistic expressions (e.g., for referring to "students in the sophomore class') regardless of the particular database implementation chosen. In all cases, TEAM will create a lexical entry for "sophomore" and an entry in the conceptual schema to represent the concept of sophomores. The database attachment for thls concept will depend on the particular database structure, as will the kinds of predicates for which it can be an argument.In designing TEAM we found it important to distinguish three differanc kinds of fields N arlthmeCic, feature (Boolean), and symbollc--on the basis of the range of linguistic expressions to which each gives rise.AriChmetic fields contain numeric values on which comparisons and computations llke averaging are likely to be done. (Fields containing dates are not yet handled by TEAM.)Feature fields contain true/false values which record whether or not some attribute is a property of the object described by the file. Symbolic fields typically contain values that correspond to nouns or adjectives that denote the subtypes of the domain denoted by the field. Different acquisition questions are asked for each type of field.These are illustrated in the example in Section D.3.The ~aJor features of the strategy developed for acquiring information about a database from a DBE include: (1) providiu E multiple levels of detail for each question posed to the DBE; (2) allowing a DBE to review previous answers and change them; and (3) checking for legal answers.At present, TEAM initially presents the DBE wlth the short-form of a quesclou. A more detailed version ("long-form') of the question, including examples illustratlng different kinds of responses, can be requested by the DBE. 
An obvious extension to this strategy would be to present different initial levels to different users (depending, for example, on their previous experience with the system). Information supplied by the DBE is, where possible, immediately integrated into the underlying knowledge structures of the program. However, we also wanted to allow the DBE to change answers to previous questions (this has turned out to be an essential feature of TEAM). Some questions (e.g., those about irregular plural forms and synonyms) affect only a single part of TEAM (the lexicon). Other questions (e.g., those about feature fields) affect all components of the system. Because of the complex interaction between acquisition questions and components of the system to be updated, immediate integration of new information is not possible. As a result, updating of the lexicon, conceptual schema, and database schema is not done until an acquisition dialogue is completed. To illustrate the acquisition of information, consider a database, called CHIP, containing information about processor chips. In particular, the fields in this database contain the following information: the identification number of a chip (ID), its manufacturer (MAKER), its width in bits (WIDTH), its speed in megahertz (SPEED), its cost in dollars (PRICE), the kind of technology (FAMILY), and a flag indicating whether or not there is an export license for the chip (EXP). In the figures discussed below, the DBE's response is indicated in uppercase. For many questions the DBE is presented with a list of options from which he can choose. For these questions, the complete list is shown and the answer indicated in boldface. Responses to the remaining questions allow TEAM to identify the kind of object the file contains information about (2), the types of linguistic expressions used to refer to it [(6) and (7)], how to identify individual objects in the database (4), and how to specify individual objects to the user (5). These responses result in the words "chip" and "processor" being added to the lexicon, a new sort added to the taxonomy (providing the interpretation for these words), and a link made in the database schema between this sort and records in the file CHIP. Figure 2 gives the short form of the most central questions asked about symbolic fields, using the field MAKER (chip manufacturers) as exemplar. These questions are used to determine the kinds of properties represented, how these relate to properties in other fields, and the kinds of linguistic expressions the field values can give rise to. Question (4) allows TEAM to determine that individual field values refer to manufacturers rather than chips. The long form of Question (7) is: Will you want to ask, for example, "How many MOTOROLA processors are there?" to get a count of the number of PROCESSORS with CHIP-MAKER = MOTOROLA? Question (8) expands to: Will you want to ask, for example, "How many MOTOROLAS are there?"
to get a count of the number of PROCESSORS with CHIP-MAKER = MOTOROLA? In this case, the answer to question (7) is "yes" and to question (8) "no"; the field has values that can be used as explicit, but not implicit, classifiers. Contrast this with a symbolic field in a file about students that contains the class of a student; in this case the answer to both questions would be affirmative because, for example, the phrases "sophomore woman" and "sophomores" can be used to refer to STUDENTS with CLASS = SOPHOMORE. In other cases, the values may serve neither as explicit nor as implicit classifiers. For example, one cannot say *"the shoe employees" or *"the shoes" to mean "employees in the SHOE department". For both questions (7) and (8) a positive answer is the default. It is important to allow the user to override this default, because TEAM must be able to avoid spurious ambiguities (e.g., where two fields have identical field values, but where the values can be classifiers for only one field). During acquisition of this field, lexical entries are made for "maker" and any synonyms supplied by the user. Again a new sort is created. It is marked as having values that can be explicit, but not implicit, classifiers. Later, when the actual connection to the database is made, individual field values (e.g., "Motorola") will be made individual instances of this new sort. Figure 3 presents the questions asked about arithmetic fields, using the PRICE field as exemplar. Because dates, measures, and count quantities are all handled differently, TEAM must first determine which kind of arithmetic object is in the field (2). In this case we have a unit of "worth" (6) measured in "dollars" (4). Questions (8) and (9) supply information needed for interpreting expressions involving comparatives (e.g., "What chips are more expensive than the Z8080?") and superlatives (e.g., "What is the cheapest chip?"). Figure 4 gives the expanded version of these questions. As a result of this acquisition, a new subsort of the (measure) sort WORTH is added to the taxonomy for PRICE, and is noted as measured in dollars. In addition, lexical entries are created for adjectives indicating positive ("expensive") and negative ("cheap") degrees of price and are linked to a binary predicate that relates a chip to its price. Feature fields are the most difficult fields to handle. They represent a single (arbitrary) property of an entity, with values that indicate whether or not the entity has the property, and they give rise to a wide range of linguistic expressions--adjectivals, nouns, phrases. The short form of the questions asked about feature fields is given in Figure 5, using the field EXP; the value YES indicates there is an export license for a given processor, and NO indicates there is not. Figures 6, 7, and 8 give the expanded form of questions (4), (6), and (8) respectively. The expanded form illustrates the kinds of end-user queries that TEAM can handle after the DBE has answered these questions (see also Figure 9). Providing this kind of illustration has turned out to be essential for getting these questions answered correctly. Each of these types of expression leads to new lexical, conceptual schema, and database schema entries. In general, in the conceptual schema, feature field adjectivals and abstract nouns result in the creation of new predicates (see Section E for an example); count nouns result in the creation of new subsorts of the file subject sort.
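The effect of acquiring the positive and negative adjectives for an arithmetic field can be illustrated with a short sketch. The data and function names below are hypothetical (this is not TEAM's algorithm); the point is only that, once "expensive"/"cheap" are tied to PRICE, comparatives and superlatives reduce to ordinary comparisons over that field.

# Illustrative sketch only.
chips = [
    {"ID": "Z8080", "MAKER": "ZILOG", "PRICE": 10.0},
    {"ID": "M6800", "MAKER": "MOTOROLA", "PRICE": 8.0},
    {"ID": "I8086", "MAKER": "INTEL", "PRICE": 15.0},
]
price_adjectives = {"expensive": "positive", "cheap": "negative"}  # supplied during acquisition

def more_than(adjective, reference_id):
    ref = next(c["PRICE"] for c in chips if c["ID"] == reference_id)
    keep = (lambda p: p > ref) if price_adjectives[adjective] == "positive" else (lambda p: p < ref)
    return [c["ID"] for c in chips if keep(c["PRICE"])]

def superlative(adjective):
    pick = max if price_adjectives[adjective] == "positive" else min
    return pick(chips, key=lambda c: c["PRICE"])["ID"]

print(more_than("expensive", "Z8080"))   # chips more expensive than the Z8080
print(superlative("cheap"))              # the cheapest chip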
The database schema contains information about which field to access and what field value is required. TEAM also includes a limited capability for acquiring verbs. At present, only transitive verbs can be acquired. One of the arguments to the predicate corresponding to a verb must be of the same sort as the file subject. The other argument must correspond to the sort of one of the fields. For the CHIP database, the DBE could specify that the verb "make" (and/or "manufacture") takes a CHIP as one argument and a MAKER as the second argument. After the DBE has completed an acquisition session for a file, TEAM can interpret and respond to end-user queries. Figure 9 lists some sample end-user queries for the file illustrated in the previous section. The role of the different kinds of information acquired above can be seen by considering the logical forms produced for several queries and the database attachments for the sorts and predicates that appear in them. The following examples illustrate the information acquired for the three different fields described in the preceding section. What are the Motorola chips? DIALOGIC produces the following logical form: (Query (WHAT t1 (THING t1) (THE p2 (AND (PROCESSOR p2) (MAKER-OF p2 MOTOROLA)) (EQ p2 t1)))) where WHAT and THE are quantifiers; t1 and p2 are variables; AND and EQ have their usual interpretation. The predicates PROCESSOR and MAKER-OF and the constant MOTOROLA were created as a result of acquisition. In general this is any word wwww such that you might want to ask a question of the form: Which PROCESSORS have wwww?
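The step from this logical form to a database query can be pictured as follows. The sketch is illustrative only (hypothetical data and names, not TEAM's data-access component): with the database schema mapping PROCESSOR to the file CHIP and MAKER-OF to its MAKER field, the query reduces to a simple selection.

database_schema = {
    "PROCESSOR": {"file": "CHIP", "key_field": "ID"},
    "MAKER-OF": {"file": "CHIP", "field": "MAKER"},
}
chip_file = [
    {"ID": "M6800", "MAKER": "MOTOROLA"},
    {"ID": "Z8080", "MAKER": "ZILOG"},
]

def answer_maker_query(maker):
    """Return the key-field values of CHIP records whose MAKER field matches."""
    field = database_schema["MAKER-OF"]["field"]
    key = database_schema["PROCESSOR"]["key_field"]
    return [row[key] for row in chip_file if row[field] == maker]

print(answer_maker_query("MOTOROLA"))  # -> ['M6800']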
domain-independent information:
The language executive [Grosz, et al., 1982; Walker, 1978], DIALOGIC, coordinates syntactic, semantic, and basic pragmatic rules in translating an English query into logical form. DIALOGIC's syntactic rules provide a general grammar of English [Robinson, 1982]. A semantic "translation" rule associated with each syntactic phrase rule specifies how the constituents of the phrase are to be interpreted. Basic pragmatic functions take local context into account in providing the interpretation of such things as noun-noun combinations. DIALOGIC also includes a quantifier-scoping algorithm. To provide access to the information in a particular database, each of the components of DIALOGIC must access domain-specific information about the words and concepts relevant to that database. The information required by the syntactic rules is found in the lexicon. Information required by the semantic and pragmatic rules is found in the lexicon or the conceptual schema. The rules themselves, however, do not include such domain-dependent information and therefore do not need to be changed for different databases. In a similar manner, the data-access component separates general rules for translating logical forms into database queries from information about a particular database. The rules access information in the conceptual and database schemata to interpret queries for a particular database. TEAM is designed to interact with two kinds of users: a database expert (DBE) and an end-user. The DBE provides information about the files and fields in the database through a system-directed acquisition dialogue. As a result of this dialogue, the language-processing and data-access components are extended so that the end-user may query the new database in natural language. Because the DBE is assumed to be familiar with database structures, but not with language-processing techniques, the acquisition dialogue is oriented around database structures. That is, the questions are about the kinds of things in the files and fields of the database, rather than about lexical entries, sort hierarchies, and predicates. The disparity between the database view of the data and the end-user's view makes the acquisition process nontrivial. For instance, consider a database of information about students in a university. From the perspective of an end-user, "sophomore" refers to a subset of all of the students, those who are in their second year at the university. The fact that a particular student is a sophomore might be recorded in the database in a number of ways, including: (1) in a separate file containing information about the sophomore students; (2) by a special value in a symbolic field (e.g., a CLASS field in which the value SOPH indicates "sophomore"); (3) by a "true" value in a Boolean field (e.g., a * in an IS-SOPH field). For natural-language querying to be useful, the end-user must be protected from having to know which type of representation was chosen. The questions posed to the DBE for each kind of database construct must be sufficient to allow DIALOGIC to handle approximately the same range of linguistic expressions (e.g., for referring to "students in the sophomore class") regardless of the particular database implementation chosen. In all cases, TEAM will create a lexical entry for "sophomore" and an entry in the conceptual schema to represent the concept of sophomores.
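Returning to the separation of domain-independent rules from domain-specific entries described at the start of this passage, the division of labor can be sketched in a few lines. This is illustrative only (not DIALOGIC); the MAKER-OF relation supplied by the noun-noun rule is a hypothetical simplification. The phrase rule and its translation never mention chips; only the lexicon does.

lexicon = {"motorola": ("ProperNoun", "MOTOROLA"), "chips": ("Noun", "PROCESSOR")}

def np_rule(modifier, head):
    """NP -> ProperNoun Noun, translated as a conjunction of the head sort and a
    relation contributed by a (hypothetical) pragmatic noun-noun rule."""
    mod_cat, mod_sem = lexicon[modifier]
    head_cat, head_sem = lexicon[head]
    assert (mod_cat, head_cat) == ("ProperNoun", "Noun")
    return ("AND", (head_sem, "x"), ("MAKER-OF", "x", mod_sem))

print(np_rule("motorola", "chips"))
# ('AND', ('PROCESSOR', 'x'), ('MAKER-OF', 'x', 'MOTOROLA'))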
The database attachment for this concept will depend on the particular database structure, as will the kinds of predicates for which it can be an argument. In designing TEAM we found it important to distinguish three different kinds of fields--arithmetic, feature (Boolean), and symbolic--on the basis of the range of linguistic expressions to which each gives rise. Arithmetic fields contain numeric values on which comparisons and computations like averaging are likely to be done. (Fields containing dates are not yet handled by TEAM.) Feature fields contain true/false values which record whether or not some attribute is a property of the object described by the file. Symbolic fields typically contain values that correspond to nouns or adjectives that denote the subtypes of the domain denoted by the field. Different acquisition questions are asked for each type of field. These are illustrated in the example in Section D.3. The major features of the strategy developed for acquiring information about a database from a DBE include: (1) providing multiple levels of detail for each question posed to the DBE; (2) allowing a DBE to review previous answers and change them; and (3) checking for legal answers. At present, TEAM initially presents the DBE with the short form of a question. A more detailed version ("long form") of the question, including examples illustrating different kinds of responses, can be requested by the DBE. An obvious extension to this strategy would be to present different initial levels to different users (depending, for example, on their previous experience with the system). Information supplied by the DBE is, where possible, immediately integrated into the underlying knowledge structures of the program. However, we also wanted to allow the DBE to change answers to previous questions (this has turned out to be an essential feature of TEAM). Some questions (e.g., those about irregular plural forms and synonyms) affect only a single part of TEAM (the lexicon). Other questions (e.g., those about feature fields) affect all components of the system. Because of the complex interaction between acquisition questions and components of the system to be updated, immediate integration of new information is not possible. As a result, updating of the lexicon, conceptual schema, and database schema is not done until an acquisition dialogue is completed.
example of acquisition questions:
To illustrate the acquisition of information, consider a database, called CHIP, containing information about processor chips. In particular, the fields in this database contain the following information: the identification number of a chip (ID), its manufacturer (MAKER), its width in bits (WIDTH), its speed in megahertz (SPEED), its cost in dollars (PRICE), the kind of technology (FAMILY), and a flag indicating whether or not there is an export license for the chip (EXP). In the figures discussed below, the DBE's response is indicated in uppercase. For many questions the DBE is presented with a list of options from which he can choose. For these questions, the complete list is shown and the answer indicated in boldface. Responses to the remaining questions allow TEAM to identify the kind of object the file contains information about (2), the types of linguistic expressions used to refer to it [(6) and (7)], how to identify individual objects in the database (4), and how to specify individual objects to the user (5). These responses result in the words "chip" and "processor" being added to the lexicon, a new sort added to the taxonomy (providing the interpretation for these words), and a link made in the database schema between this sort and records in the file CHIP. Figure 2 gives the short form of the most central questions asked about symbolic fields, using the field MAKER (chip manufacturers) as exemplar. These questions are used to determine the kinds of properties represented, how these relate to properties in other fields, and the kinds of linguistic expressions the field values can give rise to. Question (4) allows TEAM to determine that individual field values refer to manufacturers rather than chips. The long form of Question (7) is: Will you want to ask, for example, "How many MOTOROLA processors are there?" to get a count of the number of PROCESSORS with CHIP-MAKER = MOTOROLA? Question (8) expands to: Will you want to ask, for example, "How many MOTOROLAS are there?" to get a count of the number of PROCESSORS with CHIP-MAKER = MOTOROLA? In this case, the answer to question (7) is "yes" and to question (8) "no"; the field has values that can be used as explicit, but not implicit, classifiers. Contrast this with a symbolic field in a file about students that contains the class of a student; in this case the answer to both questions would be affirmative because, for example, the phrases "sophomore woman" and "sophomores" can be used to refer to STUDENTS with CLASS = SOPHOMORE. In other cases, the values may serve neither as explicit nor as implicit classifiers. For example, one cannot say *"the shoe employees" or *"the shoes" to mean "employees in the SHOE department". For both questions (7) and (8) a positive answer is the default. It is important to allow the user to override this default, because TEAM must be able to avoid spurious ambiguities (e.g., where two fields have identical field values, but where the values can be classifiers for only one field). During acquisition of this field, lexical entries are made for "maker" and any synonyms supplied by the user. Again a new sort is created. It is marked as having values that can be explicit, but not implicit, classifiers. Later, when the actual connection to the database is made, individual field values (e.g., "Motorola") will be made individual instances of this new sort.
Figure 3 presents the questions asked about arithmetic fields, using the PRICE field as exemplar. Because dates, measures, and count quantities are all handled differently, TEAM must first determine which kind of arithmetic object is in the field (2). In this case we have a unit of "worth" (6) measured in "dollars" (4). Questions (8) and (9) supply information needed for interpreting expressions involving comparatives (e.g., "What chips are more expensive than the Z8080?") and superlatives (e.g., "What is the cheapest chip?"). Figure 4 gives the expanded version of these questions. As a result of this acquisition, a new subsort of the (measure) sort WORTH is added to the taxonomy for PRICE, and is noted as measured in dollars. In addition, lexical entries are created for adjectives indicating positive ("expensive") and negative ("cheap") degrees of price and are linked to a binary predicate that relates a chip to its price. Feature fields are the most difficult fields to handle. They represent a single (arbitrary) property of an entity, with values that indicate whether or not the entity has the property, and they give rise to a wide range of linguistic expressions--adjectivals, nouns, phrases. The short form of the questions asked about feature fields is given in Figure 5, using the field EXP; the value YES indicates there is an export license for a given processor, and NO indicates there is not. Figures 6, 7, and 8 give the expanded form of questions (4), (6), and (8) respectively. The expanded form illustrates the kinds of end-user queries that TEAM can handle after the DBE has answered these questions (see also Figure 9). Providing this kind of illustration has turned out to be essential for getting these questions answered correctly. Each of these types of expression leads to new lexical, conceptual schema, and database schema entries. In general, in the conceptual schema, feature field adjectivals and abstract nouns result in the creation of new predicates (see Section E for an example); count nouns result in the creation of new subsorts of the file subject sort. The database schema contains information about which field to access and what field value is required. TEAM also includes a limited capability for acquiring verbs. At present, only transitive verbs can be acquired. One of the arguments to the predicate corresponding to a verb must be of the same sort as the file subject. The other argument must correspond to the sort of one of the fields. For the CHIP database, the DBE could specify that the verb "make" (and/or "manufacture") takes a CHIP as one argument and a MAKER as the second argument. After the DBE has completed an acquisition session for a file, TEAM can interpret and respond to end-user queries. Figure 9 lists some sample end-user queries for the file illustrated in the previous section. The role of the different kinds of information acquired above can be seen by considering the logical forms produced for several queries and the database attachments for the sorts and predicates that appear in them. The following examples illustrate the information acquired for the three different fields described in the preceding section. What are the Motorola chips?
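The verb-acquisition facility mentioned above can also be pictured with a short sketch. The representation below is hypothetical (not TEAM's): an acquired transitive verb such as "make" is recorded as a binary predicate whose argument sorts tie it to the file subject (CHIP) and to one field (MAKER), so a question like "Does Zilog make the Z8080?" reduces to a field lookup.

verb_entries = {"make": {"subject_sort": "MAKER", "object_sort": "CHIP", "field": "MAKER"}}
chip_file = [{"ID": "Z8080", "MAKER": "ZILOG"}, {"ID": "M6800", "MAKER": "MOTOROLA"}]

def holds(verb, subject, obj):
    entry = verb_entries[verb]
    row = next(r for r in chip_file if r["ID"] == obj)
    return row[entry["field"]] == subject

print(holds("make", "ZILOG", "Z8080"))   # True
print(holds("make", "ZILOG", "M6800"))   # False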
DIALOGIC produces the following logical form: (Query (WHAT t1 (THING t1) (THE p2 (AND (PROCESSOR p2) (MAKER-OF p2 MOTOROLA)) (EQ p2 t1)))) where WHAT and THE are quantifiers; t1 and p2 are variables; AND and EQ have their usual interpretation. The predicates PROCESSOR and MAKER-OF and the constant MOTOROLA were created as a result of acquisition. In general this is any word wwww such that you might want to ask a question of the form: Which PROCESSORS have wwww?
:
A major benefit of using natural language for access to the information in a database is that it shifts onto the system the burden of mediating between two views of the data: the way in which the data is stored (the "database view"), and the way in which an end-user thinks about it (the "user's view"). Database information is recorded in terms of files, records, and fields, while natural-language expressions refer to the same information in terms of entities and relationships in the world. A major problem in constructing a natural-language interface is determining how to encode and use the information needed to bridge these two views. Current natural-language interface systems require extensive efforts by specialists in natural-language processing to provide them with the information they need to do the bridging. The systems are, in effect, hand-tailored to provide access to particular databases. This paper focuses on the problem of constructing transportable natural-language interfaces, i.e., systems that can be adapted to provide access to databases for which they were not specifically hand-tailored. It describes an initial version of a transportable system, called TEAM (for Transportable English Access Data Manager). The hypothesis underlying the research described in this paper is that the information required for the adaptation can be obtained through an interactive dialogue with database management personnel who are not familiar with natural-language processing techniques. Issues of Transportability: The insistence on transportability distinguishes TEAM from previous systems such as LADDER [Hendrix et al., 1978], LUNAR [Woods, Kaplan, and Webber, 1972], PLANES [Waltz, 1975], and REL [Thompson, 1975], and has affected the design of the natural-language processing system in several ways. The decision to provide transportability to existing conventional databases (which distinguishes TEAM from CHAT [Warren, 1981]) means that the database cannot be restructured to make the way in which it stores data more compatible with the way in which a user may ask about the data. Although many problems can be avoided if one is allowed to design the database as well as the natural-language system, given the prevalence of existing conventional databases, approaches which make this assumption are likely to have limited applicability in the near term. The TEAM system has three major components: (1) an acquisition component, (2) the DIALOGIC language system [Grosz, et al., 1982], and (3) a data-access component. Section C describes how the language and data-access components were designed to accommodate the needs of transportability. Section D describes the design of the acquisition component to allow flexible interaction with a database expert and discusses acquisition problems caused by the differences between the database view and the user view. Section E shows how end-user queries are interpreted after an acquisition has been completed. Section F describes the current state of development of TEAM and lists several problems currently under investigation. In TEAM, the translation of an English query into a database query takes place in two steps. First, the DIALOGIC system constructs a representation of the literal meaning or "logical form" of the query [Moore, 1981]. Second, the data-access component translates the logical form into a formal database query. Each of these steps requires a combination of some information that is dependent on the domain or the database with some information that is not.
To provide for transportability, the TEAM system carefully separates these two kinds of information. To adapt TEAM to a new database, three kinds of information must be acquired: information about words, about concepts, and about the structure of the database. The data structures that encode this information--and the language-processing and data-access procedures that use them--are designed to allow for acquiring new information automatically. Information about words, lexical information, includes the syntactic properties of the words that will be used in querying the database and semantic information about the kind of concept to which a particular word refers. TEAM records the lexical information specific to a given domain in a lexicon. A database schema encodes information about how concepts in the conceptual schema map onto the structures of a particular database. In particular, it links conceptual-schema representations of entities and relationships in the domain to their realization in a particular database. TEAM currently assumes a relational database with a number of files. (No language-processing-related problems are entailed in moving TEAM to other database models.) Each file is about some kind of object (e.g., employees, students, ships, processor chips); the fields of the file record properties of the object (e.g., department, age, length).
Appendix:
| null | null | null | null | {
"paperhash": [
"robinson|diagram:_a_grammar_for_dialogues",
"grosz|dialogic:_a_core_natural-language_processing_system",
"warren|efficient_processing_of_interactive_relational_data_base_queries_expressed_in_logic",
"moore|problems_in_logical_form",
"waltz|natural_language_access_to_a_large_data_base:_an_engineering_approach",
"waltz|natural_language_access_to_a_large_data_base:_an_engineering_approach"
],
"title": [
"DIAGRAM: a grammar for dialogues",
"DIALOGIC: A Core Natural-Language Processing System",
"Efficient Processing of Interactive Relational Data Base Queries expressed in Logic",
"Problems in Logical Form",
"Natural language access to a large data base: an engineering approach",
"Natural Language Access To A Large Data Base: An Engineering Approach"
],
"abstract": [
"An explanatory overview is given of DIAGRAM, a large and complex grammar used in an artificial intelligence system for interpreting English dialogue. DIAGRAM is an augmented phrase-structure grammar with rule procedures that allow phrases to inherit attributes from their constituents and to acquire attributes from the larger phrases in which they themselves are constituents. These attributes are used to set context-sensitive constraints on the acceptance of an analysis. Constraints can be imposed by conditions on dominance as well as by conditions on constituency. Rule procedures can also assign scores to an analysis to rate it as probable or unlikely. Less likely analyses can be ignored by the procedures that interpret the utterance. For every expression it analyzes, DIAGRAM provides an annotated description of the structure. The annotations supply important information for other parts of the system that interpret the expression in the context of a dialogue.\nMajor design decisions are explained and illustrated. Some contrasts with transformational grammars are pointed out and problems that motivate a plan to use metarules in the future are discussed. (Metarules derive new rules from a set of base rules to achieve the kind of generality previously captured by transformational grammars but without having to perform transformations on syntactic analyses.)",
"The DIALOGIC system translates English sentences into representations of their literal meaning in the context of an utterance. These representations, or \"logical forms,\" are intended to be a purely formal language that is as close as possible to the structure of natural language, while providing the semantic compositionality necessary for meaning-dependent computational processing. The design of DIALOGIC (and of its constituent modules) was influenced by the goal of using it as the core language-processing component in a variety of systems, some of which are transportable to new domains of application.",
"Relational database retrieval is viewed as a special case of deduction in logic. It is argued that expressing a query in logic clarifies the problems involved in processing it efficiently (\"query optimisationn). The paper describes a simple but effective strategy for planning a query so that it can be efficiently executed by the elementary deductive mechanism provided in the programming language Prolog. This planning algorithm has been implemented as part of a natural language question answering system, called Chat-80. The Chat-80 method of query planning and execution is compared with the strategies used in other relational database systems, particularly Ingres and System R.",
"Abstract : Most current theories of natural-language processing propose that the assimilation of an utterance involves producing an expression or structure that in some sense represents the literal meaning of the utterance. It is often maintained that understanding what an utterance literally means consists in being able to recover such a representation. In philosophy and linguistics this sort of representation is usually said to display the \"logical form\" of an utterance. This paper surveys some of the key problems that arise in defining a system of representation for the logical forms of English sentences and suggests possible approaches to their solution. The author first looks at some general issues relating to the notion of logical form, explaining why it makes sense to define such a notion only for sentences in context, not in isolation, and then discusses the relationship between research on logical form and work on knowledge representation in artificial intelligence. The rest of the paper is devoted to examining specific problems in logical form. These include the following: quantifiers; events, actions and processes; time and space; collective entities and substances; propositional attitudes and modalities; and questions and imperatives.",
"An intelligent program which accepts natural language queries can allow anon-technical user to easily obtain information from a large non-uniform data base. This paper discusses the design of a program which will tolerate a wide variety of requests including ones with pronouns and referential phrases. The system embodies a certain amount of common sense, so that for example, it \"knows when it does or does not understand a particular request and it can bypass actual data base search in answering unreasonable requests. The system is conceptually simple and could be easily adapted to other data bases.",
"A n i n t e l l i g e n t program which accepts n a t u r a l language q u e r i e s can a l l o w a n o n t e c h n i c a l user to e a s i l y o b t a i n i n f o r m a t i o n from a l a r g e non-uniform d a t a base. T h i s paper discusses the d e s i g n of a program which w i l l t o l e r a t e a wide v a r i e t y o f r e quests i n c l u d i n g ones w i t h pronouns and r e f e r e n t i a l phrases. The system embodies a c e r t a i n amount o f common sense, so t h a t f o r example, i t \"knows when i t does or does not understand a p a r t i c u l a r request and i t can bypass a c t u a l d a t a base search in answering unreasonable r e q u e s t s . The system is c o n c e p t u a l l y simple and could be e a s i l y adapted to o t h e r data bases. 1. I n t r o d u c t i o n The prime o b s t a c l e f o r n o n t e c h n i c a l people who wish to use computers is the need to l e a r n a s p e c i a l language f o r communicating w i t h the mac h i n e . W e f e e l t h a t the time i s r i p e f o r n a t u r a l language systems which can be used by persons who are not t r a i n e d i n any s p e c i a l computer language. Such a system must embody a degree of \"common sense,\" must have a r e l a t i v e l y l a r g e and complete v o c a b u l a r y f o r the s u b j e c t m a t t e r t o b e t r e a t e d , must accept a wide range of grammatical c o n s t r u c t i o n s , and of course must be capable of p r o v i d i n g the i n f o r m a t i o n and computations requested by the user. In o r d e r to design such an a m b i t i o u s system, i t i s i m p e r a t i v e t h a t the u n i v e r s e o f d i s c o u r s e b e l i m i t e d i n some way. Th i s paper d e s c r i b e s work done on a n a t u r a l language q u e s t i o n a n s w e r i n g system f o r a data base c o n t a i n i n g d e t a i l e d records of U.S. Naval a i r c r a f t maintenance and f l i g h t i n f o r mation over a p e r i o d of time. While the s u b j e c t m a t t e r o f t h i s system i s t h e r e f o r e q u i t e c o n s t r a i n ed, we f e e l t h a t the issues we are c o n f r o n t i n g are nonetheless g e n e r a l . I n p a r t i c u l a r , w e w i l l des c r i b e the r e p r e s e n t a t i o n o f common sense i n f o r m a t i o n and procedures, problems of pronoun r e f e r e n c e and storage o f p a r t i a l r e s u l t s , the a b i l i t y t o answer vague o r p o o r l y d e f i n e d q u e s t i o n s , the a b i l i t y of the system to r e c o g n i z e when requests are meaningless or too p o o r l y f o r m u l a t e d to answer, as w e l l a s l i n g u i s t i c issues."
],
"authors": [
{
"name": [
"Jane J. Robinson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Grosz",
"Norman Haas",
"G. Hendrix",
"Jerry R. Hobbs",
"P. Martin",
"Robert C. Moore",
"Jane J. Robinson",
"S. Rosenschein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Warren"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Robert C. Moore"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Waltz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Waltz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"17788520",
"11289202",
"42399194",
"18655604",
"62861335",
"8239353"
],
"intents": [
[
"methodology"
],
[],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
]
],
"isInfluential": [
false,
false,
false,
false,
false,
false
]
} | - Problem: The paper addresses the challenge of constructing transportable natural-language interfaces for databases, focusing on the issue of bridging the gap between the database view (how data is stored) and the user's view (how an end-user thinks about the data).
- Solution: The hypothesis of the research is that the necessary information for adapting natural-language interfaces to different databases can be obtained through interactive dialogues with database management personnel unfamiliar with natural-language processing techniques. | 504 | 0.21627 | null | null | null | null | null | null | null | null |
208d4316fbe1e7d22afec64d04b0036b46c662b1 | 18150820 | null | Introducing ASK, A Simple Knowledgeable System | ASK, A Simple Knowledgeable System, is a total system for the structuring, manipulation and communication of information. | {
"name": [
"Thompson, Bozenn H. and",
"Thompson, Frederick B."
],
"affiliation": [
null,
null
]
} | null | null | First Conference on Applied Natural Language Processing | 1983-02-01 | 0 | 49 | null | In contrast to expert systems, in which experts build the knowledge base and users make use of this expert knowledge, ASK is aimed at the user who wishes to create, test, modify, extend and make use of his own knowledge base. It is a system for a research team, a management or military staff, or a business office. This paper is designed to give you a feel for the general performance of the ASK System and an overview of its operational capabilities. To this end, the movie you see will continue throughout the talk. Indeed, the talk itself is a commentary on this background movie. The movie is bona fide and in real time; it is of the ASK System in action. (Many of the illustrations from the movie are reproduced in the written paper.) To introduce you to ASK, we will start out with a few examples of queries of a simple data base concerning ships. There are 2 answers: (1) New York (destination) ships 2 (2) New York (home port) ships 1 >How many ships are there with lnegth greater than 600 feet? Spelling correction: "lnegth" to "length" 4 >What ships that carry wheat go to London or Oslo? ships that carry wheat London Maru Oslo Alamo >Does the Maru carry wheat and go to London? yes. Although in the terminology of data base theory, ASK can be considered as an "entity-relation" system, ASK retains its information in records which are interlinked in a semantic net. One reason we refer to ASK as simple is because it organizes its information in terms of classes, objects, attributes and relations. We speak of this as the COAR structure. Attributes are single-valued, e.g., "father", "home port", "title"; relations may be multiple-valued, e.g., "child", "cargo", "author". The difference between attributes and relations can be seen in the following protocol. >What is the cargo and home port of the Maru? cargo home port wheat London >The home port of Maru is Boston. London has been replaced by Boston as the home port of Maru. >The cargo of Maru is coal. coal has been added as the cargo of Maru. >What is the cargo and home port of the Maru? cargo home port wheat Boston coal --- C. Extending and Modifying the Data. To make such a system more knowledgeable, one needs to be able to add definitions that embody interrelationships among the basic classes, objects, attributes and relations of the data. The simplest form of definition is a synonym: >definition:tub:old ship Defined. Although this form of definition allows one to introduce abbreviations and many forms of jargon, more extensive forms of definition are desirable. Here are three illustrations using the same "ship" file as above. In the third definition, note the use of quotes to create local "variables". Most verbs embody knowledge specific to the application in which they are used, the exceptions being the copula verbs. Therefore the only verbs initially known to the ASK System are "to be" and "to have".
Is some European port a port of Maru? There is no port. >London is Alamo's port. London has been added as the port of Alamo. >Is som European port a prot of Maru? Spelling corrections: "som" to "some", "prot" to "port". There is no port of Maru. >New York is Maru's port. New York has been added as port of Maru. >Is some European port a port of Maru. Is some European port a port of Maru? no. So far we have illustrated ASK capabilities using only two types of objects: individuals, e.g., "John Jones", "Maru"; numbers, e.g., "34.6 feet", "length of Maru", "number of ships". ASK has been designed, however, to facilitate many kinds of objects. This is a capability orthogonal to the simple COAR structure in that for any type of object there may be corresponding classes, attributes and relations. We will illustrate this multiple object type capability with the additional object type: text. Once this new object type was added (together with procedures to manipulate texts, i.e., a "word processor"), then text classes, individual/text and text/individual attributes and relations were immediately available. It was a small task to add an electronic mail system to ASK; all that was required was an addition to the authorization procedure that assigned to each newly authorized person a new text class as his/her mail box. Although the ASK System has been designed to allow the addition of new object types, this can be done only by an application programmer. The major obstacle is the necessity to provide a procedure to initialize instances of the new object type and procedures that carry out their intrinsic manipulation. However, we expect the addition of new object types to be a common occurrence in the applications of the ASK System. In any potential application area, using groups have accumulations of data already structured in specific ways and families of procedures that they have developed to manipulate these structures. In ASK, they can identify these data structures as a new object type, design simple syntax for them to invoke their procedures, and thus embed their familiar objects and manipulations within the ASK English dialect and within the same context as other associated aspects of their tasks. The class, attribute and relation constructions become immediately available. The movie, which accompanied the oral presentation of this paper, demonstrated that the response time, i.e., the time between completion of the typing of the input by the user to the appearance of the response on the terminal, is very good. In the terminology of ASK, a user "Context" is a knowledge base together with the vocabulary and definitions that go with it. A given user will usually have several Contexts for various purposes, such as the small "Ships" Context, a (truncated) bibliography of Artificial Intelligence literature, and an administrative Context concerning budget matters. Several Contexts can be based on a given one, and one Context can be based on several; thus a hierarchical structure of Contexts can be realized. All Contexts are directly or indirectly based upon the BASE Context, which contains the function words and grammar of the ASK dialect of English, the mathematical and statistical capabilities, and the word processor. It is easy and fast to apply ASK to a new domain, given that a data base for this new domain is available in machine readable form.
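The attribute/relation distinction shown in the cargo and home-port protocol can be captured in a few lines. The following is an illustrative sketch only (hypothetical names, not ASK's implementation): assigning to a single-valued attribute replaces the old value, while assigning to a multiple-valued relation accumulates values.

attributes = {"home port": {}}   # single-valued: assignment replaces
relations = {"cargo": {}}        # multiple-valued: assignment accumulates

def set_attribute(name, obj, value):
    old = attributes[name].get(obj)
    attributes[name][obj] = value
    return f"{old} has been replaced by {value}" if old else f"{value} has been added"

def add_relation(name, obj, value):
    relations[name].setdefault(obj, []).append(value)
    return f"{value} has been added"

set_attribute("home port", "Maru", "London")
add_relation("cargo", "Maru", "wheat")
print(set_attribute("home port", "Maru", "Boston"))   # London has been replaced by Boston
print(add_relation("cargo", "Maru", "coal"))           # coal has been added
print(attributes["home port"]["Maru"], relations["cargo"]["Maru"])  # Boston ['wheat', 'coal']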
The vehicle is the ASK dialogue-driven Bulk Data Input capability, which can be called upon to build an existing database into one's Context. The result not only integrates this new data with that already in the Context and under the ASK dialect of English, but in many circumstances will make the use of this data more responsive to users' needs. The Bulk Data Input Dialogue prompts the user for the necessary information to (1) establish the physical structure of the data base to be included, and (2) add necessary classes and attributes as needed for the new data entries. The user also indicates, using English constructions, the informational relationships among the fields in the physical records of the database file that s/he wishes carried over to the ASK Context. Some have raised the question whether natural language is always the most desirable medium for a user to communicate with the computer. Many other cryptic ways to communicate user needs to a knowledgeable system can be thought of; often the most useful means will be highly specific to the particular application. For example, in positioning cargo in the hold of a ship, one would like to be able to display the particular cargo space, showing its current cargo, and call for and move into place other items that are to be included. In the past, enabling the system to respond more intelligently to the user's needs required the provision of elaborate programs, since the user's tasks may be quite involved, with complex decision structures. The introduction of terse, effective communication has incurred long delays, and thus the changing needs of a user had little chance of being met. In the ASK System, the users themselves can provide this knowledge. They can instruct the system on how to elicit the necessary information and how to complete the required task. This ASK capability is quite facile, opening the way for its ubiquitous use in extending the knowledgeable responsiveness of the computer to users' immediate needs. ASK includes two system-guided dialogues, similar to the Bulk Data Input dialogue, by which users can instruct the System on how to be more responsive to their needs. The Form is an efficient means of communication with which we are all familiar. A number of computer systems include a Forms package. For most of these, however, filling in a Form results only in a document; the Form does not constitute a medium for interacting with the knowledge base or controlling the actions of the system. The ASK Forms capability enlarges the roles and ways in which Forms can be used as a medium for user interaction. As the user fills in the fields of a Form, the System can make use of the information being supplied to (1) check its consistency with the data already in the knowledge base and, if necessary, respond with a diagnostic, (2) fill in other fields with data developed from the knowledge base, (3) extend the knowledge base, adding to the vocabulary and adding or changing the data itself, and (4) file the completed form in prescribed files or in those indicated by the user and also mail it to a specified distribution list through the electronic mail subsystem. Since the Form processing can check consistency and modify the knowledge base, Forms can be used to facilitate data input. Since Form processing can fill fields in the Form, the Forms capability includes the functions of a report generator. Letters and memos can be written as special cases of Form filling, automatically adding dates, addresses, etc.
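The per-field processing described for Forms can be sketched declaratively. This is an illustrative sketch only (hypothetical format and names, not ASK's Forms implementation): each field carries a check, a fill, or an update action, so filling in the form both validates entries against the knowledge base and extends it.

knowledge_base = {"ships": {"Maru": {"home port": "Boston"}}}

ship_report_form = [
    {"field": "ship",      "action": "check",  "rule": lambda v: v in knowledge_base["ships"]},
    {"field": "home port", "action": "fill",   "rule": lambda form: knowledge_base["ships"][form["ship"]]["home port"]},
    {"field": "cargo",     "action": "update", "rule": lambda form, v: knowledge_base["ships"][form["ship"]].update(cargo=v)},
]

def process(form_spec, entries):
    form = {}
    for spec in form_spec:
        name = spec["field"]
        if spec["action"] == "check":
            assert spec["rule"](entries[name]), f"unknown value for {name}"
            form[name] = entries[name]
        elif spec["action"] == "fill":
            form[name] = spec["rule"](form)
        else:  # update the knowledge base from the user's entry
            spec["rule"](form, entries[name])
            form[name] = entries[name]
    return form

print(process(ship_report_form, {"ship": "Maru", "cargo": "coal"}))
# {'ship': 'Maru', 'home port': 'Boston', 'cargo': 'coal'}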
and filing and dispatching the result. Users must be able to add new Forms if they are to be a convenient tool. That is the function of the Forms Designing Dialogue. Much like the Bulk Data Input Dialogue, the Forms Designing Dialogue holds a dialogue with the user through which s/he can specify the fields of the Form itself and the processing of the above kinds to be automatically accomplished at the time the Form is filled in. Here is a simple example of a form that was designed using the Forms Designing Dialogue. >What is the home port and commander of each old ship? There are 2 answers: In the day-by-day use of an interactive system, users are very often involved in repetitive tasks. They could be relieved of much of the drudgery of such tasks if the system were more knowledgeable. Such a knowledgeable system, as it goes about a task for the user, may need additional information from the user. What information it needs at a particular point may depend on earlier user inputs and the current state of the database. We hasten to add that it is not a general-purpose programming environment. It is for "ultra-high" level programming, gaining its programming efficiency through the assumption of an extensive vocabulary and knowledge base on which it can draw. The illustrative dialogue above, which adds a new item to a bibliography, is an example of a simple dialogue designed using DDD. An HP9836 with an HP9725 disk was used in the illustrations in this paper. Our work is supported by the Hewlett Packard Corporation, Desktop Computer Division. The user must provide the system with knowledge of a particular task; more precisely, s/he must program this knowledge into the system. The result of this programming will be a system-guided dialogue which the user can subsequently initiate and which will then elicit the necessary inputs. Using these inputs in conjunction with the knowledge already available, particularly the data base, the system completes the task. It is this system-guided dialogue that the user needs to be able to design. In the ASK System, there is a special dialogue which can be used to design system-guided dialogues to accomplish particular tasks. We call this the Dialogue Designing Dialogue (DDD). Using DDD, the user becomes a computer-aided designer. Since DDD, in conducting its dialogue with the user, only requires simple responses or responses phrased in ASK English, the user need have little programming skill or experience. Using DDD, the user alone can replace a tedious, repetitive task with an efficient system-guided dialogue, all in a natural language environment. The ASK Dialogue Designing Dialogue constitutes a high-level, natural language programming capability.
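A dialogue produced by the Dialogue Designing Dialogue can be pictured as a list of prompts plus an action to run on the collected answers. The sketch below is illustrative only (hypothetical format and names, not ASK's DDD), using the add-a-bibliography-item example.

bibliography = []

add_reference_dialogue = {
    "prompts": ["Author?", "Title?", "Year?"],
    "action": lambda answers: bibliography.append(dict(zip(["author", "title", "year"], answers))),
}

def run(dialogue, answers):
    # In an interactive system the answers would come from the user, one per prompt.
    assert len(answers) == len(dialogue["prompts"])
    dialogue["action"](answers)

run(add_reference_dialogue, ["Thompson", "Introducing ASK", "1983"])
print(bibliography)  # [{'author': 'Thompson', 'title': 'Introducing ASK', 'year': '1983'}]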
anaphora: pronouns and ellipses:
In practical systems for experts, abbreviated forms of addressing the computer are common. Thus the ability to handle pronominal and elliptical constructions are of considerable importance. Although there has been progress in the last few years in the linguistic understanding of these constructions, many difficulties remain. However, building on the work that ham been accomplished, many of these constructions can be handled by the ASK System. In order to avoid misleading the user when the computational algorithm does not make the correct interpretation, echo is used to inform the user of the interpretation that has been taken. Is some European port a port of Maru? There is no port. >London is Alamo's port.London has been added as the port of Alamo. >Is som European port a prot of Maru?Spelling corrections: "son" to "some" "prot"tO "port" There is no port of Maru. >New York is Maru's port. New York has been added as port of Maru. >Is some European port a port of Maru.Is some European port a port of Maru? noSo far we have illustrated ASK capabilities using only two types of objects:individuals, e.g., "John Jones", "Maru" numbers, e.g., "34.6 feet", "length of Maru", "number of ships". ASK has been designed, however, to facilitate many kinds of objects. This is a capability orthogonal to the simple COAR structure in that for any types of objects there may be corresponding classes, attributes and relations.We will illustrate this multiple object type capability with the additional object type: text. Once this new object type was added (together with procedures to manipulate texts, i.e., a "word processor") then text classes, individual/text and text/individual attributes and relations were immediately available.It was a small task to add an electronic mail system to ASK; all that was required was an addition to the authorization procedure that assigned to each newly authorized person a new text class as his/her mail box. Although the ASK System has been designed to allow the addition of new object types, this can be done only by an application programmer. The major obstacle is the necessity to provide a procedure to initialize instances of the new object type and procedures that carry out their intrinsic manipulation.However, we expect the addition of new object types to be a common occurrence in the applications of the ASK System. In any potential applicaion areas, using groups have accumulations of data already structured in specific ways and families of procedures that they have developed to manipulate these structures.In ASK, they can identify these data structures as a new object type, design simple syucax for them to invoke their procedures, and thus embed their familar objects and manipulations within the ASK English dialect and within the same context as other associated aspects of their tasks.The class, attributed and relation constructions become immediately available.The movie, which accompanied the oral presentation of this paper, demonstrated that the response rime, i.e., the time between completion of the typing of the input by the user Co the appearance of the response on the terminal, is very good.But In the terminology of ASK, a user "Context" is a knowledge base together with the vocabulary and definitions that S o with it. A given user will usually have several Contexts for various purposes, some of which may be the small "Ships" Context, a (truncated) bibliography of Artificial Intelligence literature and an administrative Context concerning budget matters. 
Several Contexts can be based on a given one, and one Context can be based on several, thus a hierarchical structure of Contexts can be realized. All Contexts are directly or indirectly based upon the BASE Context, which contains the function words and grammar of the ASK dialect of English, the mathematical and statistical capabilities, and the word processor.It is easy and fast to apply ASK to a new domain, given that a data base for this new domain is available in machine readable form. The vehicle is the ASK dialogue-driven Bulk Data Input capability, which can be called upon to build an existing database into one's Context.The result not only integrates this new data with that already in the Context and under the ASK dialect of English, but in many circumstances will make the use of this data more responsive to users" needs.The Bulk Data Input Dialogue prompts the user for necessary information to (i) establish the physical structure of the data base to be included, (2) add necessary classes and attributes as needed for the new data entries.The user also indicates, using English constructions, the informational relationships among the fields in the physical records of the database file that s/he wishes carried over to the ASK Context.Some have raised the question whether natural language is always the most desirable medium for a user to communicate with the computer. Many other cryptic ways to communicate user needs to a knowledgeable system can be thought of; often the most useful means will be highly specific to the particular application.For example, in positioning cargo in the hold of a ship, one would like to be able to display the particular cargo space, showing its current cargo, and call for and move into place other items that are to be included.In the past, enabling the system to respond more intelligently to the user's needs required the provision of elaborate programs since the user's tasks may be quite involved, with complex decision structures. The introduction of terse, effective communication has incurred lout delays and thus the changing needs of a user had little chance of being met. In the ASK System, the users themselves can provide this knowledge.They can instruct the system on how to elicit the necessary information and how to complete the required task. This ASK capability is quite facile, opening the way for its ubiquitous use in extending the knowledgeable responsiveness of the computer to user's immediate needs. ASK includes two systemguided dialogues, similar to the Bulk Data Input dialogue by which users can instruct the System on how to be more responsive to their needs.The Form is an efficient means of communication with which we are all familiar.A number of computer systems include a Forms package. 
For most of these systems, however, filling in a Form results only in a document; the Form does not constitute a medium for interacting with the knowledge base or controlling the actions of the system. The ASK Forms capability enlarges the roles and ways in which Forms can be used as a medium for user interaction. As the user fills in the fields of a Form, the System can make use of the information being supplied to (1) check its consistency with the data already in the knowledge base and, if necessary, respond with a diagnostic, (2) fill in other fields with data developed from the knowledge base, (3) extend the knowledge base, adding to the vocabulary and adding or changing the data itself, (4) file the completed form in prescribed files or in those indicated by the user and also mail it to a specified distribution list through the electronic mail subsystem. Since the Form processing can check consistency and modify the knowledge base, Forms can be used to facilitate data input. Since Form processing can fill fields in the Form, the Forms capability includes the functions of a report generator. Letters and memos can be written as special cases of Form filling, automatically adding dates, addresses, etc., and filing and dispatching the result. Users must also find it easy to add new Forms, if they are to be a convenient tool. That is the function of the Forms Designing Dialogue. Much like the Bulk Data Input Dialogue, the Forms Designing Dialogue holds a dialogue with the user through which s/he can specify the fields of the Form itself and the processing of the above kinds to be automatically accomplished at the time the Form is filled in. Here is a simple example of a Form that was designed using the Forms Designing Dialogue. >What is the home port and commander of each old ship? There are 2 answers: In the day-by-day use of an interactive system, users are very often involved in repetitive tasks. They could be relieved of much of the drudgery of such tasks if the system were more knowledgeable. Such a knowledgeable system, as it goes about a task for the user, may need additional information from the user. What information it needs at a particular point may depend on earlier user inputs and the current state of the database. The user must provide the system with knowledge of a particular task; more precisely s/he must program this knowledge into the system. The result of this programming will be a system guided dialogue which the user can subsequently initiate and which will then elicit the necessary inputs. Using these inputs in conjunction with the knowledge already available, particularly the data base, the system completes the task. It is this system-guided dialogue that the user needs to be able to design. In the ASK System, there is a special dialogue which can be used to design system guided dialogues to accomplish particular tasks. We call this the Dialogue Designing Dialogue (DDD). Using DDD, the user becomes a computer-aided designer. Since DDD, in conducting its dialogue with the user, only requires simple responses or responses phrased in ASK English, the user need have little programming skill or experience. Using DDD, the user alone can replace a tedious, repetitive task with an efficient system guided dialogue, all in a natural language environment. The ASK Dialogue Designing Dialogue constitutes a high-level, natural language programming capability. We hasten to add that it is not a general purpose programming environment. It is for "ultra-high" level programming, gaining its programming efficiency through the assumption of an extensive vocabulary and knowledge base on which it can draw. The illustrative dialogue above, which adds a new item to a bibliography, is an example of a simple dialogue designed using DDD. An HP9836 with an HP9725 disk was used in the illustrations in this paper. Our work is supported by the Hewlett Packard Corporation, Desktop Computer Division.
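As a rough illustration of what a DDD-designed dialogue amounts to, the following sketch elicits a fixed sequence of inputs and then uses them to complete a task, here adding an item to a bibliography as in the example mentioned above. The structure, names and sample answers are hypothetical, not ASK's internal representation of a designed dialogue.

# Minimal sketch (assumed structure, not ASK's DDD output) of a system-guided
# dialogue: a list of prompts elicits the inputs, and a completion step uses
# them together with the knowledge base to finish the task.

bibliography = []   # the "knowledge base" this dialogue extends

dialogue = {
    "prompts": ["Author?", "Title?", "Year?"],
    "complete": lambda answers: bibliography.append(
        dict(zip(["author", "title", "year"], answers))),
}

def run(dialogue, answers):
    """Stand-in for the interactive loop: pair each prompt with the user's reply."""
    collected = []
    for prompt, reply in zip(dialogue["prompts"], answers):
        print(prompt, reply)
        collected.append(reply)
    dialogue["complete"](collected)

run(dialogue, ["Smith", "A Sample Title", "1983"])
print(bibliography)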
:
In contrast to expert systems, in which experts build the knowledge base and users make use of this expert knowledge, ASK is aimed at the user who wishes to create, test, modify, extend and make use of his own knowledge base. It is a system for a research team, a management or military staff, or a business office.This paper is designed to give you a feel for the general performance of the ASK System and overview of its operational capabilities.To Chin end, the movie you see will continue throughout the talk.Indeed, the talk itself is a commentary on this background movie.The movie is bona fide and in real time, it is of the ASK System in action.(Many of the illustrations from the movie are reproduced in the written paper.)To introduce you to ASK, we will start out with a few examples of queries of a simple data base concerning ships.The There are 2 answers:(1) New York (destination) ships 2 (2) New York (home port) ships 1 >How many ships are there with lnegth greater than 600 feet? Spelling correction: "lnegth" to "length" 4 >What ships that carry wheat go to London or Oslo?ships that carry wheat London Maru Oslo Alamo >Does the Maru carry wheat and go co London? yesAlthough in the terminology of data base theory, ASK can be considered as an "entityrelation" system, ASK retains its information in records which are interlinked in a semantic net. One reason we refer to ALE as simple is because ic We speak of this as the COAR structure.A~tributes are single valued, e.g., "father", "home port", "title";relations may be multiple valued, e.g., "child"~ "cargo", "author". The difference between attributes and relations can be seen in the following protocol.>What is the cargo and home port of the Maru? cargo home port wheat London >The home port of Maru is Boston. London has been replaced by Boston as the home port of Maru. >The cargo of Maru is coal.coal has been added as the cargo of Maru. >What is the cargo and home port of the Maru? cargo home port wheat BosCon coal ---C. Extendin K and Hodifyin~ the Dat~To make such a system more knowledgeable, one needs to be able co add definitions that embody interrelationships among the basic classes, objects, attributes and relations of the data. The simplest form of definition is synonym: >definition:tub:old ship Defined.Although this form of definition allows one to introduce abbreviations and many forms of jargon, more extensive forms of definition are desirable. Here are three illustrations using the same "ship" file as above.In the third definition, note the use of quotes to create local '~ariables". Most verbs embody knowledge specific to the application in which they are used, the exceptions being the copula verbs.Therefore the only verbs initially known to the ASK System are "to be" and "to have".The
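The attribute/relation distinction shown in the Maru protocol above can be mimicked with a small sketch: assigning an attribute replaces the old value, while asserting a relation adds a value. The dictionary-based store and function names are assumptions made only for illustration and are not ASK's record structure.

# Sketch of the attribute/relation distinction described above.
# (Illustrative only; field names and layout are assumed, not ASK's internal format.)

objects = {"Maru": {"home port": "London", "cargo": ["wheat"]}}

ATTRIBUTES = {"home port"}   # single valued
RELATIONS  = {"cargo"}       # may be multiple valued

def assert_fact(obj, slot, value):
    record = objects[obj]
    if slot in ATTRIBUTES:
        record[slot] = value                         # "London has been replaced by Boston"
    elif slot in RELATIONS:
        record.setdefault(slot, []).append(value)    # "coal has been added"

assert_fact("Maru", "home port", "Boston")
assert_fact("Maru", "cargo", "coal")
print(objects["Maru"])   # {'home port': 'Boston', 'cargo': ['wheat', 'coal']}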
Appendix:
| null | null | null | null | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 504 | 0.097222 | null | null | null | null | null | null | null | null |
333abebd4496e855a26706650c4c305f4fa9a6d3 | 10164113 | null | How to Drive a Database Front End Using General Semantic Information | This paper describes a front end for natural language access to databases making extensive use of general, l~. domain-independent, semantic information for question interpretation. In the interests of portability, initial syntactic and semantic processing of a question is carried out without any reference to the database domain, and domain-dependent operations are confined to subsequent, comparatively straightforward. processing o£ the initial interpretation. The different modules of the front end are described, and the system's performance is illustrated by examples. I I~TRODUC'TION We believe that there is a lot of mileage to be got from non-task-specific semantic analysis of user requests, because their resulting rich, explicit, and ncrmalised meaning representations are a ~ccd | {
"name": [
"Boguraev, B.K. and",
"Sparck Jones, K."
],
"affiliation": [
null,
null
]
} | null | null | First Conference on Applied Natural Language Processing | 1983-02-01 | 11 | 21 | null | null | null | This paper describes a front end for natural language access to databases making extensive use of general, l~. domain-independent, semantic information for question interpretation.In the interests of portability, initial syntactic and semantic processing of a question is carried out without any reference to the database domain, and domain-dependent operations are confined to subsequent, comparatively straightforward. processing o£ the initial interpretation. The different modules of the front end are described, and the system's performance is illustrated by examples.Following the developmemt 0£ various front ends for natural language access to databases, it is now generally agreed that such a front end must utillse at least three different kinds of knowledge to accomplish its task: linguistic k~owledge, knowledge of the domain of discourse, and knowledge of the organlsational structure of the database. Thus broadly speaking, a user request to the database goes through three conceptually different forms: the output of linguistic analysis o£ the question, its representation in terms of the domain's conceptual schema, and its interpretation in the database access language. Early natural language front ends usually did not have a clearcut separation between the different stages of the process: for example LUNAR (Woods 1972 ) merged the domain model and the database model into one, and systems such as the early incarnation of LADDER (Hendrix et al 1978) and PLANES (Waltz 1978) made heavy use of semantic grammars with their domain-dependent lexicons ccmbinin8 linguistic kncwledge with domain knowledge and so merging the first two stages. None 0£ these systems, moreover, made any significant use of ~eneral, as opposed to domain-specific, semantic information.In an attempt to achieve portability from one database to another, mcst current systems adhere to a ~eneral framework (Konolige 1979) , which makes a clear distinction between the different processing phases and distinguishes the domain-dependent from the domaln-independent parts of the front end, and also domain operations from database management cperatlons. However semantic processing is still This work is supported by the U.K. Science and Engineering Research Council. 8t essentially driven by domain-dependent semantics. Linguistic processing is therefore primarily syntactic parsing, and relating general linguistic to specific domain knowledge within the framework of a modular front end takes the form of applying domain-dependent semantic processing to the output of the syntactic parser. This may be done in a slmple, minded way as in PHLIQAI (Bronnenberg et al 1979) and T~ (Damerau 1980) , or by providing hooks in the syntactic representation (domain-independent calls to semantic operators which will evaluate differently in dl£ferent contexts), as in DIALOGIC (Grosz et ai 1982) . In either case the usual unhappy consequence o£ separating syntactic and semantic processing, namely the hassle of manipulating alternative syntactic trees, follows. 
Furthermore, changing domains implies changing the definitions of the semantic operators, which are procedural in nature, while it may be preferable to keep the domain-dependent parts of the front end in declarative form, as is indeed done in (Warren and Pereira 1981). Thus in systems of this by now conventional type, the 'portability' achieved by confining the necessary domain-dependent semantic processing to well-defined modules is purchased at the heavy price of limiting the early linguistic processing to syntax, and, perhaps, some very global and undiscriminating semantics (see for example the scoping algorithm of (Grosz et al 1982)). Our objective is to do better than this by making more use of powerful, but still non-domain-dependent, semantics in the front-end linguistic analysis. Doing this should have two advantages: restraining syntax, and providing a good platform for domain-dependent semantic processing. However, the overall architecture of the front end still follows the Konolige model in maintaining a clearcut separation between the different kinds of knowledge to be utilised, keeping the bulk of the domain-dependent knowledge in declarative form, and attempting to minimise the consequences of changes in the front end environment, whether of domain or database model, to promote smooth transfers of the front end from one back end database management system to another. We believe that there is a lot of mileage to be got from non-task-specific semantic analysis of user requests, because their resulting rich, explicit, and normalised meaning representations are a good starting point for subsequent task-specific operations, and specifically are better than either syntax trees, or the actual input text of e.g. the PLANES approach. Furthermore, since the domain world is (in some sense) a subset of the real world, it is possible to interpret descriptions of it using the same semantic apparatus and representation language as is used by the natural language analyser, which should allow easy and reliable linking of the natural language input words, domain world objects and relationships, and data language terms and expressions. Since the connections between these do not appear hard-wired in the lexicon, but are established on the basis of matching rich semantic patterns, no changes at all should be required in the lexicon as the application moves from one domain or database to another, only expansions to allow for the semantic definitions of new words relevant to the new application. The approach leads to an overall front end structure as follows: (Figure: question -> (1) natural language analysis -> meaning representation -> (2) extraction -> logic representation -> (3) translation -> query representation -> (4) conversion -> search specification for the database management system.) Each process in the diagram operates on the output of the previous one. Processes 1 and 2 constitute the analysis phase, and processes 3 and 4 the translation phase. Such a system has essentially been constructed, and is under active test; a detailed account of its components and operations follows. For the purposes of illustration we shall use questions addressed to the Suppliers and Parts relational database of (Date 1977). This has three relations with the following structure: Supplier(Sno, Sname, Status, Scity), Part(Pno, Pname, Colour, Weight, Pcity), and Shipments(Sno, Pno, Quantity). A. The Analyser. The natural language analyser has been described in detail elsewhere (Boguraev 1979), (Boguraev and Sparck Jones 1982), and only a brief summary will be presented here.
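Before the module-by-module summary, the four-stage structure just outlined can be sketched as a simple pipeline. The stage names follow the paper, but this Python rendering and its placeholder bodies are purely illustrative; the actual system is implemented in LISP and is not organised this way internally.

# Sketch of the four-stage front end described above.  The stage names follow
# the paper (analyser, extractor, translator, convertor); the bodies here are
# placeholders, not the Cambridge implementation.

def analyser(question):          # 1: meaning representation (domain independent)
    return {"meaning": question}

def extractor(meaning):          # 2: logic representation (quantified, still domain free)
    return {"logic": meaning}

def translator(logic):           # 3: query representation in domain-world terms
    return {"query": logic}

def convertor(query):            # 4: search representation for the target DBMS
    return {"search": query}

ANALYSIS_PHASE    = [analyser, extractor]
TRANSLATION_PHASE = [translator, convertor]

def front_end(question):
    result = question
    for stage in ANALYSIS_PHASE + TRANSLATION_PHASE:
        result = stage(result)
    return result

print(front_end("Who supplies green parts?"))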
It has been designed as a general purpose, domain-and task-independent language processor, driven by a fairly extensive llnguistlcally-motivated grammar and controlled in its operation by variegated application cf a rich and powerful semantic apparatus. Syntacticallycontrolled constituent identification is coupled with the Judgemental application cf semantic specialists:following the evaluation of the semantic plausibility of the constituent at hand, the currently active processor either aborts the analysis path or constructs a meaning representation for the textual unit (noun phrase, ccmplementiSero embedded clause, etc.) for incorporation into any larger semantic construct. The philosophy behind the anal yser is that syntactlcally-drlven analysis (which is a major prerequisite for domain-and/or task-independence) is made efficient by frequent and timely calls to semantic specialists, which both control blind syntactic backtracking and construct meaning representations for input text without going through the potentiall y costly enumeration of intermediate syntactic trees. The analyser can therefore operate smoothly in environments which are syntactically or lexically hlghiy ambiguous.To achieve its objectives the program pursues a passive parsing strategy based on semantic pattern matching of the kind proposed by (Wilks 1975) . Thus the semantic specialists work with a range of patterns referring to narrower or broader word classes, all defined using general semantic primitives and ultimately depending on formulae which use the primitives to characterise individual word senses. However the application of patterns in the search for input text meaning is mcre effectively controlled by syntax in this system than in Wilks'.The particular advantages of the approach in the database application context are the powerful and flexible means of representing linguistic and world knowledge provided by the semantic primitives, and the ease with which 'traps for the unexpected' can be procedurally encoded. The latter means that the system can readily deal with the kinds cf problems generated by unconstrained natural language text which provoke untoward 'ripple' effects when large semantic grammars are mcdified. For present purposes, the form and ccntent cf the outputs of the natural language analyser are more important than the means by which they are derived (for these see Boguraev and Sparck Jones 1982). The meaning representations output by the analyser are dependency structures with clusters of case-labelled components centred around main verb or noun elements. Apart from the structure of the dependency tree itself, and group identifying markers like 'ins' and 'modallty', the substantive information in the meaning representation is provided by the case labels, which are drawn from a large set of semantic relation primitives forming part of the overall inventory of primitives, and by the semantic category primitive characterisations of lexicallyderived items.The formulae charaoterislng word senses may be quite rich. The fairly straightforward characterisation of 'supplier1', representing one sense of "supplier" is (Supplier ...( supplier 1 (~(ee~t obJe) give) (subJ CorK)) ...), meaning approximately that some sort of organisatton (which may reduce to an individual) gives entities. The meaning representation for the whole sentence "Suppliers live in cities" (with the formulae for individual units abbreviated, for space reasons, to their head primitives) is( el ause ........ (v (livel ... 
be I @@agent (n (supplierl ... am))) ee~oca~ion (n (city2 ... spread)))))), where ~and @location are case labels. "The parts are coloured red" will be analysed as( el ause ...... (v (be2 ... be thin in tpartl ... mennK)))yl(@@number (@~state ~:~ <colourl ... sign) (val (red1 ... sense))))))), and "Who supplies green parts?" will give rise to the structure:(clause ... (type question) (v (supplyl ... 81ve (@@agent (n (query (d~y)))) ~race (clause V agent)) (clause (v (be2 ... be (@@@gent £n <partl ... ~InS))) (@@state (st (eolourl ... sign) (gr, eenl ... , tsee ~.se))))))))))))).As these examples sho~ the anal yser's representations combine expressive power with structural simplicity. Further, the power of the semantic category primitives used to identify text message patterns means that it is possible to achieve far mcre semantic analysis cf a question, far earlier in the frcnt end processing, than can be achieved with frcnt ends conforming tc the Koncllge model. The effectiveness cf the anal yser as a general natural-language prccesslng device has been demcnstrated by its successful application to a range of natural language processing tasks. There is, however, a price to pay, in the database context, for its generality. Natural language makes ocn=acn use of vague concepts ("have", "do"), almost content-empty markers ("be e, "of"), and opaque constructions such as compound nouns. Clearl~ front ends where domainspecific information can provide leverage in interpreting these input text items have advantages. and it is not clear how a principled solution to the problems they present can be achieved within the framework of a general-purpose anal yser of the kind described.To provide a domain-specific interpretation of, for example, compounds like "supplier city", an interface would have to be provided oharaeterising domain k~owledge in the semantic terms familiar to the parser, and guaranteeing the provision of explicit structural charaoterlsations of the text constituent which would be available for further exploitation by the parser.To avoid invoking domain knowledge in this way in analysis we have been obliged to accept questicn interpretations which are incomplete in limited respects. That is, we push the ordinary semantic analysis procedures as far as they will go, accepting that they may leave 'dummy' markers in the dependency structure and compound nominals with ambiguous member words and no explicit extracted structure. though not yet domain-and databaseoriented, processing. Imposing domain world and database organisatlon restrictions on the question at this stage would be premature, since it cculd ecmplloate or even inhibit possible later inference operations. 
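As a rough picture of the kind of structure the later stages receive, the case-labelled representation of "Who supplies green parts?" might be approximated as nested data. This is a simplification invented for illustration: the analyser's real output uses its own label set and attaches full primitive formulae to every lexical item.

# A simplified, assumed rendering of a case-labelled dependency structure as
# nested Python data (the analyser's real output is richer and differently labelled).

meaning = {
    "type": "question",
    "verb": "supply1",
    "@agent": {"noun": "query"},          # the wh-element being asked for
    "@object": {                          # "@object" is an assumed label, used here for brevity
        "noun": "part1",
        "@state": {"predicate": "colour1", "value": "green1"},
    },
}

def lexical_items(node):
    """Walk the dependency structure and collect the word senses it mentions."""
    found = []
    for key, value in node.items():
        if isinstance(value, dict):
            found.extend(lexical_items(value))
        elif key in ("verb", "noun", "predicate", "value"):
            found.append(value)
    return found

print(lexical_items(meaning))   # ['supply1', 'query', 'part1', 'colour1', 'green1']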
The idea of providing a system component addressing a general linguistic task, without throwing away any detailed information not in fact needed for some specific instance of that task, like natural language distinctions between quantifiers ignored by the database system, is also an attractive one. The extractor thus emphasises the fact that the input text is a question, but carries the detailed semantic information provided by the analyser forward for exploitation in the translation phase of the processing. A good way to achieve a question formulation abstracted from the low-level organisation of the database is to interpret the user's input as a formal query. However our extractor, unlike the equivalent processors described by (Woods 1972), ... The logic representation of the question which is output by the extractor highlights the search aspects of the input, formalising them so that the subsequent processes which will eventually generate the search specification for the database management system can locate and focus on them easily; at the same time, the semantic richness of the original meaning representation is maintained to facilitate the later domain-oriented translation operations. The syntax of the logic representation closely follows that defined by (Woods 1978): (For <quantifier> <variable> / <range> : <restrictions on variable> - <proposition>), where each of the restrictions, or the proposition, can themselves be quantified expressions. The rationale for such quantified expressions as media for questions addressed towards an abstract database has been discussed by Woods. As we accept this, we have developed a transformation procedure which takes the meaning representation of an input question and constructs a corresponding logic representation in the form just described. Thus for the question "Who supplies green parts?" analysed in Section A, we obtain (For Every $Var1 / query : (For Every $Var2 / part1 : (colour1 $Var2 green1) - (supply1 $Var1 $Var2)) - (Display $Var1)), where the lexically-derived items indicating the ranges of the quantified variables ('query', 'part1'), the relationships between the variables ('supply1') and the predicates and predicate values ('colour1', 'green1') in fact carry along with them their semantic formulae: these are omitted here, and in the rest of the paper, to save space. The extractor is geared to seek, in the analyser's dependency structures, the simple propositions (atomic predications) which make up the logic representation. Following the philosophy of the semantic theory underlying the analyser design, these simple propositions are identified with the basic messages, i.e. semantic patterns, which drive the parser and are expressed in the meaning representations it produces as verb and noun group clusters of case-related elements. In order to 'unpack' these, the extractor looks for the sources of atomic predicates as 'SVO' triples, identifiable by a verb (or noun) and its case role fillers, which can be extracted quite naturally in a straightforward way from the dependency structure. Depending both on the semantic characterisation of the verb and its case arguments, and on the semantic context as defined by the dependency tree, the triples are categorised as belonging to one of two types: [$Obj $Link $Obj] or [$Obj $Poss $Prop], where the $Obj, $Link, or $Prop items are further characterised in semantic terms.
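The quantified logic representation itself can be rendered as a small recursive structure. The field names below are assumptions made for the sketch, but the shape follows the (For quantifier variable / range : restrictions - proposition) syntax given above, with restrictions and propositions allowed to be further quantified expressions.

# Sketch of the quantified logic representation as a recursive Python structure.

from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class For:
    quantifier: str                  # "Every", "The", ...
    variable: str                    # "$Var1"
    range: str                       # "query", "part1", ...
    restrictions: List[Any] = field(default_factory=list)   # may themselves be For-expressions
    proposition: Any = None          # an atomic predication or another For-expression

# "Who supplies green parts?"
green_parts = For("Every", "$Var2", "part1",
                  restrictions=[("colour1", "$Var2", "green1")],
                  proposition=("supply1", "$Var1", "$Var2"))
who_supplies = For("Every", "$Var1", "query",
                   restrictions=[green_parts],
                   proposition=("Display", "$Var1"))

print(who_supplies.proposition)           # ('Display', '$Var1')
print(who_supplies.restrictions[0].range) # 'part1'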
It is clear that the 'basic messages' that the extractor seeks to identify as a preliminary step tc ccnstructing the logic representation define either primitive relationships between objects, cr properties of those same cbjects. Thus the meaning representation for "part suppliers" will be unpicked as a 'dummy' relationship between "suppliers" and "parts", i.e. as[$ObJ1(supplierl) $Link1(dummy) $Obj2(partl)].while "green parts" will be interpreted as[$Obj2(part 1) $Poss(be2) SProp(colourl =green 1) ].Larger constructs can be similarly deocmpcsed: thus "Where do the status 32 red parts suppliers live?" will be broken down into the following set of triples:[$ObJl(supplierl) SLinkl(livel) $ObJS(query)] & [$Objl(supplierl) SLink2(dummy) $Ob~2(partl)] & [$Objl(supplierl) SPossl(be2) $Prcpl(status=32) ] & [$Obj2(partl) SPcss2(be2) $Prcp2(cclcurl=redl)J.It must be empbasised that while there are parallels between these structures and those of the entityattribute approach to data modelling, the forms cf triple were chosen without any reference to databases. As noted earlier, they naturally reflect the form of the 'atomic propositions', i.e. basic messages, used as semantic patterns by the natural language anal yser.For completeness, the triples underlying the earlier question "Who supplies green parts?" are[$Obj1(query=identity) $Llnkl(supplyl) $Ob32(partl)] & [$Obj2(part 1) $Possl(be2) $Prcpl(cclcurl=greenl)]The sets cf interconnected triples are derived from the meaning representations by a fairly simple recursive prccedure. The icgic representaticn defines the logical content and structure cf the information the user is seeking. It may, as ncted, be inccmplete at pcints where domain reference is required, e.g. in the interpretation cf compound ~cuns; but it carries along, tc the translator, the very large amcunt cf semantic information provided by the case labels and formulae of the meaning representation, which should be adequate to pinpoint the items sought by the user and tc describe them in terms suited to the database management system, so they may be accessed and retrieved.In the process of transforming the semantic content of the user's question into a low-level search representation geared to the administrative structure of the target database, it is necessary to reconcile the user's view of the world with the domain model. Before even attempting to construct, Say, a relational algebra expression to be interpreted by the back-end database management system, we must try to interpret the semantic content of the loKlc representation with reference to the se~emt cr variant of the real world modelled by the database.An obvious possibility here is to proceed directly from the variables and predications of the Icglc representation to their database counterparts. For example,svarl/supplierl (Bin) SVar2/partl (t~t~)) can be mapped directly onto a relation Shipments in the Suppliers and Parts database. The mapping could be established by reference to the lexicon and to a schedule of equivalences between logical and database structures. This approach suffers, however, from severe problems: the most important is that end users do not necessarily constrain their natural language to a highly limited vocabulary. Even in the simple context of the ~,ppliers and Parts database, it is possible to refer to "firms", "goods", "buyers", "sellers", "provisions", "customers", etc. In fact, it was precisely in order to bring variants under a common denominator that semantic grammars were employed. 
We, in contrast, have a more powerful, because more flexible, semantic apparatus at our disposal, capable of drawing out the similarities between "firms", "sellers", and "suppllers", as opposed to taking them as read. Thus a general semantic pattern which will match the dictionary definitions cf all of these words is (((neat obJm) give) (~bJ |org) ). Furthermore, if instead of attempting to define any sort of direct mapping between the natural language terms and expressions of the user and corresponding domain terms and expressions, we concentrate on finding the common links between them, we can see that even though the domain and, in turn, database terms and expression= may not mean exactly the same as their natural language relatives or sources, we should be able to detect overlaps in their semantic characterlsatlons. It is unlikely that the same cr similar words will be used in both natural and data languages if their meanings have ncthing in ccmmcn, even if they are not identical, so characterising each using the same repertoire of semantic primitives shculd serve to establish the link~ between the two. Thus, for example, one sense of the natural language word "iccaticn"will have the formula (this (where spread) ) and the data language word "&city" referring to the domain object &city will have the formula (((man folk) wrap) (wl~re spread)), which can be connected by the common constituent (~re spread).One distinctive feature of our front end design, the use of general semantics for initial question interpretation, iS thus connected with ancther: the more stringent requirements imposed on natural lanKusge to data language translation by the initial unconstrained question interpretation can be met by exploiting the resources for language meaning representation initially utilised for the natural language question interpretation. We define the domain world modelled by the database using the same semantic apparatus as the one used by the natural language front end processor, and invoke a flexible and sophisticated semantic pattern marcher tc establish the connection between the semantic content of the user question (which is carried over in the logic representation) and related ccncepts in the domain world. Taking the next step from a domain world concept or relationship between domain world obJants to their direct model in the administrative structure of the database is then relatively easy.Since the domain world is essentially a closed world restricted in sets if not in their members, it is possible to describe it in terms of a limited set of concepts and relationships: we have possible properties of objects and potential relationships between them. We can talk about &suppliers and &parts and the important relationship between them, namely that &suppliers &supply &parts. We can also specify that &suppliers &llve in &cities, &parts can be &n,-bered, and so on.We can thus utillse, either explicitly or implicitly, a description of the domain world which could be represented by dependency structures llke those used for natural language. The important point about these is the way they express the semantic content of whole statememts about the domain, rather than the way they label individual domaln-referrlng terms as, e.g. "&supplier" or "&part". 
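The overlap test this linking relies on can be sketched as follows. The two formulae are simplified transcriptions of the "location" and "&city" examples above, and the flattening function is an invented stand-in for the system's much richer pattern matcher.

# Sketch of the overlap test: two primitive formulae (nested tuples of semantic
# primitives) are connectable if they share a constituent such as ('where', 'spread').

def constituents(formula):
    """Flatten a nested formula into the set of sub-tuples it contains."""
    found = set()
    if isinstance(formula, tuple):
        found.add(formula)
        for part in formula:
            found |= constituents(part)
    return found

location_sense = ("this", ("where", "spread"))                       # NL word "location"
city_domain    = ((("man", "folk"), "wrap"), ("where", "spread"))    # domain word "&city"

common = constituents(location_sense) & constituents(city_domain)
print(common)   # {('where', 'spread')}: the shared constituent that links the two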
It is then easy to see how the logic representation for the question "What are the numbers of the status 30 suppliers?", name1 y (For Every Syarl./suppllerl :(statusl $Varl 30) - (Dlap~ay tnum~rl $Varl))),can be unpacked by semantic pattern matching routines to establish the ccnnecticn between "supplier 1" and "&supplier", "number 1" and "&number", and so on. In the same way the lcgic representations for "From where does Blake operate?" and "Where are screws found?" can be analysed for semantic content which will establish that "Blake" is a &supplier, "operate" in the context cf the database domain means &supply, and "where" is a query marker acting fcr &city from which the &supplier Blake &supplies (as opposed to street corner, bucket shop, or crafts market); similarly, "screW' is an instance of &part and the cnly iccational information associated with &parts in the database in question is the &city where they are stored. All this becomes clear simply by matching the underlying semantic primitive definitions of the natural language and domain world words, in their propositional contexts.The translator is alac the module where domain reference is brought in tc complete the interpretation cf the input question where this cannot be fully interpreted by the analyser alcne. (&live $Var1(&supplier) $Var3(&clty)), while translating the logic representation for the example question "Who supplies green parts?" gives the query representation (For Every SVarl/&suppller:(For Every $Var2/&part : (&cclour iVar2Kreen) -(&supply $Varl SVar2)) - (Display $Varl)).Apart from the fact that semantic pattern matching seems to cope quite successfully with unexpected inputs ('unexpected' in the sense that in the alternative approach nc mapping function would have been defined for them, thus implying a failure to parse and/or interpret the input question), having a general natural language analyser at our disposal offers an additional bonus: the description of the domain world in terms of semantic primitives and primitive patterns can be generated largely automatically, since the domain world can be described in natural language (assuming, of course, an apprcpriate lexicon of domain world Words and definitions) and the descriptions simply analysed as utterances, producing a set of semantic structures which can subsequently be prccessed to cbtaln a repertoire of domain-relevant forms to be exploited fcr the matching procedures.Having identified the domain . terms and expressions, we have a high-level database equivalent cf the original English question. A substantial amcunt cf processing has pinpointed the question focus, has eliminated potential ambiguities, has resolved domain-dependent language ccnstructicns, and has provided fillers for 'dummy' or 'query' items. Further, the system has established that "London" is a &city, for example, cr that "Clark" is a specific instance of &supplier. The processing now has to make the final transition to the specific fcrm in which questions are addressed to the actual database management system. 
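Once such matches are established, the translator's substitution step amounts to something like the following. The mapping table is hard-wired here purely for illustration; in the system it is the outcome of the semantic pattern matching just described, not a fixed lookup.

# Sketch of the translator's substitution of domain terms for word senses.

matched = {
    "supplier1": "&supplier",
    "part1":     "&part",
    "supply1":   "&supply",
    "colour1":   "&colour",
    "query":     "&supplier",   # the wh-element resolved to a domain class
}

def translate(term):
    return matched.get(term, term)

logic_atom = ("supply1", "$Var1", "$Var2")
query_atom = tuple(translate(t) for t in logic_atom)
print(query_atom)   # ('&supply', '$Var1', '$Var2')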
The semantic patterns cn which the translator relies, for example defining a domain word "&supplier" as (((cent obje) give) (subJ IorK)), while adequate encugh tc deduce that Clark is a &supplier, are not informative enough to suggest how &suppliers are modelled in the actual database.Again, the cbvious approach to adopt here is the mapping one, so that, for instance, we have: &supplier :=> relation Supplier Clark ==> tuple of relation Supplier such that Shames"Clark"But this approach suffers from the same limitations as direct mapping from logic representation tc search representation; and a mcre flexible apprcach using the way the database mcdels the domain world has been adopted.In the previous section we discussed how the translator uses an inventory of semantic patterns to establish the connection between natural language and domain world words. This inventory is not, however, a flat structure with no internal organisatlon.On the ccntrar~ the semantic information about the domain world is crganised in such a way that it can naturally be associated with the administrative structure cf the target database, For example in a relational database, a relation with tuples over domains represents properties of. cr relationships between, the objects in the domain world. The objects, properties and relationships are described by the semantic apparatus used for the translator, and as they also underlie, at not toc great remove, the database structure, the domain world concepts or predications of the query representation act as pointers into the data structures cf the database administrative crganlsatlon.For example, given the relation supplier over the domains S~ame, Snc. Status and Scity. the semantic patterns which describe the facts that in the domain world &suppliers &have &status, &numbers, &names and &live in &cities are crcsslinked, in the sense that they have the superstructure cf the database relation .Supplier imposed over them. We can thus use them to avoid explicit mapping between query data references and template relaticnal structures for the database. From the initial meaning representation for the question fragment "... Clark, who has status 30 ..." through to the query representation, the semantic pattern matching has established that Clark is an instance cf &supplier, that the relationship between the generic &supplier and the specific instance of &supplier (i.e. Clark) is that cf &name, and that the query is focussed cn his &status (whose value is supplied explicitly). Now from the position of the query predication (&status &supplier 30) in the characterisaticn cf the relaticn Supplier, the system will be able tc deduce that the way the target database administrative structure models the question's semantic ccntent is as a relation derived from Supplier with "Clark" and "30" as values in the columns Shame and Status respectlvely.The convertor thus employs declarative knowledge about the database organisaticn and the correspondence between this and the domain world structure to derive a generalised relational algebra expression which is an interpretation cf the formal query in the context of the relational database model of the domain. 
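The declarative knowledge the convertor draws on can be pictured as a table cross-linking domain predications with the relations and columns that model them. The concrete format below is an assumption, though the entries follow the Suppliers and Parts schema given earlier, and the printed example mirrors the Clark, status 30 discussion above.

# Sketch (assumed format) of the cross-links between domain-world predications
# and the relations/columns of the target database.

DOMAIN_TO_DB = {
    ("&supplier", "&name"):   ("Supplier",  "Sname"),
    ("&supplier", "&status"): ("Supplier",  "Status"),
    ("&supplier", "&live"):   ("Supplier",  "Scity"),
    ("&part",     "&colour"): ("Part",      "Colour"),
    ("&supply",):             ("Shipments", ("Sno", "Pno")),
}

# "... Clark, who has status 30 ..." -> restrict the Supplier relation
relation, column = DOMAIN_TO_DB[("&supplier", "&status")]
print(f"select {relation} where {column} = 30 and Sname = 'Clark'")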
We have chosen to gear the convertor towards a generallsed relational algebra expression, because both its simple underlying definition and the generality of its data structures within the relational model allow easy generation of final low-level search representations for different specific database access systems.To derive the generallsed relational algebra form of the question from the query representation, the convertor uses its k~owledge of the way domain objects and predications are modelled in the database to establish a primary or derivable relation for each of the'quantifled variables of the query representation. These constituents of the algebra expression are then combined, with an appropriate sequence of relational operators, to obtain the complete expression.The basic premise of the convertor is that every quantified variable in the formal representation can be associated with some primary or computable relation in the target database; restrictions on the quantified variables specify how, with that relation as a starting point, further relational algebra computations can be performed to mcdel the restricted variable; the process is recurslve, and as the query representation is scanned by the convertor, variables and their associated relational algebra expressions are bound by an 'environmemttype' mechanism which provides all the necessary information to 'evaluate' the propositions of the quer~ Thus ccnverslon is evaluating a predicate expression in the context of its semantic interpretation in the domain ~rld and the envlronmemt of the database • models for its variables.For example, given the query representation fragment for the phrase "... all London suppliers who supply red parts ..", namely (For Every SVarl/&supplier :(AND (For The $Var3/London -(&live SVarl SVar3)) (For Every SVar2/&part : (&cclcur SVar2 red) -(&supply $VarlSVar2))) .... SVarl will initially be bound to the primary relation .Supplier, which will be subsequently restricted to those tuples Where Sctty is equal to "London". Slmllarl~ $Var2 will be associated with a partial relation derived from Part, for which the value of Colcur is "red". Evaluating the prcposltion (&supply SVarl $Var2). whose dcmain relationship Is mcdelled in the database by Shipments, will in the envlrcnment of $Varl and SVar2 yield the relational expression (jcin I select .Suppller where Seity equals "London") j91n Shlpmen~s ~select Part where Colcur equals "red"))).At this point, the information that the user wants has been described in terms of the target relational database: names cf files, fields and columns. The search description has, however, still to be given the specific form required by the back-end database management system. This is achieved by a fairly straightforward application of standard ccmplling techniques, and does not deserve detailed discussicn here. At present we can generate search specifications in three different relational search languages. Thus the final form in the local search language Salt of the example question "Who supplies green parts?" is list (Part:Colour="green"• (Supplier • Shipments)) 87 V IMPLEMENTATIONAll of the modules have been implemented (in LISP). The convertor is at present restricted to relational databases, and we would like to extend it to other models. 
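For concreteness, the select and join operations that such a generalised relational algebra expression calls for can be mimicked over toy data. This stands in for the back-end database management system rather than the front end itself; the sample tuples are invented, but the column names follow the schema given earlier.

# Sketch of select/join evaluation over toy Suppliers-and-Parts data.

Supplier  = [{"Sno": "S1", "Sname": "Smith", "Scity": "London"},
             {"Sno": "S2", "Sname": "Jones", "Scity": "Paris"}]
Part      = [{"Pno": "P1", "Colour": "green"}, {"Pno": "P2", "Colour": "red"}]
Shipments = [{"Sno": "S1", "Pno": "P1"}, {"Sno": "S2", "Pno": "P2"}]

def select(rel, col, value):
    return [t for t in rel if t[col] == value]

def join(left, right, col):
    return [{**l, **r} for l in left for r in right if l[col] == r[col]]

# "Who supplies green parts?" ~ Supplier . Shipments . (Part : Colour = "green")
result = join(join(Supplier, Shipments, "Sno"),
              select(Part, "Colour", "green"), "Pno")
print([t["Sname"] for t in result])   # ['Smith']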
The system has so far been tested cn Suppliers and Parts, which is a toy database from the point of view of scale and complexity, but which is rich enough to allow questions presenting challenges tO the general semantics approach to question interpretation.To illustrate the performance of the front end. we show below the query representations and final search representations for some questions addressed to this database. Work is currently in progress to apply the front end to a different (relational) database containing planning information: this simulates IBM's TQA database (Damerau 1980). Most of the work in this is likely to come in writing the lexical entries needed for the new vocabulary. Longer term developments include validating each step of the translation by generating back into English, and extending the front end, and specifically the translator, with an inference engine.Clearly. in the longer term, database front ends will have to be provided with an inference capability. As Konolige points out, in attempting tc insulate users, with their particular and varied views of the domain cf discourse, from the actual administrative organisatlon cf the database, it may be necessary to do an arbitrary amcunt cf inferenclng exploiting domain informaticn to connect the user's question with the database. An obvious problem ~r~th front ends not clearly separating different processing stages is that it may be difficult to handle inference in a coherent and ccntrclled way. Insofar as inference is primarily domain-based, it seems natural in a modular front end to provide an inference capability as an extension of the translator. This should serve bcth tc Iccaliae inference operations and to facilitate them because they can work on the partially-processed input question, However the inference engine requires an ex pllclt and well-crganised domain model, and specifically one which is rather more comprehensive than current data models, or than the rather infcrmal nonce ptual schema we have used tc dr i ve the translator.We hope to begin work on providing an inference capability in the near future, but it has to be reccgnised that even for the restricted task cf database access, it may prove impossible to confine inference operations to a single mcdule: dcing so would imply, for example, that compound nouns will generally only be partly interpreted in the analysis and extraction phases. Starting with inference limited to the translation mcdule is therefore | null | null | Main paper:
:
This paper describes a front end for natural language access to databases making extensive use of general, l~. domain-independent, semantic information for question interpretation.In the interests of portability, initial syntactic and semantic processing of a question is carried out without any reference to the database domain, and domain-dependent operations are confined to subsequent, comparatively straightforward. processing o£ the initial interpretation. The different modules of the front end are described, and the system's performance is illustrated by examples.Following the developmemt 0£ various front ends for natural language access to databases, it is now generally agreed that such a front end must utillse at least three different kinds of knowledge to accomplish its task: linguistic k~owledge, knowledge of the domain of discourse, and knowledge of the organlsational structure of the database. Thus broadly speaking, a user request to the database goes through three conceptually different forms: the output of linguistic analysis o£ the question, its representation in terms of the domain's conceptual schema, and its interpretation in the database access language. Early natural language front ends usually did not have a clearcut separation between the different stages of the process: for example LUNAR (Woods 1972 ) merged the domain model and the database model into one, and systems such as the early incarnation of LADDER (Hendrix et al 1978) and PLANES (Waltz 1978) made heavy use of semantic grammars with their domain-dependent lexicons ccmbinin8 linguistic kncwledge with domain knowledge and so merging the first two stages. None 0£ these systems, moreover, made any significant use of ~eneral, as opposed to domain-specific, semantic information.In an attempt to achieve portability from one database to another, mcst current systems adhere to a ~eneral framework (Konolige 1979) , which makes a clear distinction between the different processing phases and distinguishes the domain-dependent from the domaln-independent parts of the front end, and also domain operations from database management cperatlons. However semantic processing is still This work is supported by the U.K. Science and Engineering Research Council. 8t essentially driven by domain-dependent semantics. Linguistic processing is therefore primarily syntactic parsing, and relating general linguistic to specific domain knowledge within the framework of a modular front end takes the form of applying domain-dependent semantic processing to the output of the syntactic parser. This may be done in a slmple, minded way as in PHLIQAI (Bronnenberg et al 1979) and T~ (Damerau 1980) , or by providing hooks in the syntactic representation (domain-independent calls to semantic operators which will evaluate differently in dl£ferent contexts), as in DIALOGIC (Grosz et ai 1982) . In either case the usual unhappy consequence o£ separating syntactic and semantic processing, namely the hassle of manipulating alternative syntactic trees, follows. 
Furthermore, changlngdomalns implies changing the definitions of the semantic operators, which are procedural in nature, while it may be preferable to keep the domain-dependent parts of the front end in declarative form, as is indeed done in (Warren and Pereira 1981) .Thus in systems of this by now conventional type, the 'portability' achieved by confining the necessary domain-dependent semantic processing to welldefined modules is purchased at the heavy price of limiting the early linguistic processing to syntax, and, perhaps, some very global and undiscriminating semantics (see for example the sccping algorithm of (Grosz et al 1982)).Our objective is to do better than this by making more use of powerful, but still non-domain-dependent semantics in the front-end linguistic analysis. Doing this should have two advantages: restraining syntax, and providing a good platform for domaindependent semantic processing. However, the overall architecture of the front end still follows the Konolige model in maintaining a clearcut separation between the different kinds of knowledge to be utilised, keeping the bulk of the domain-dependent knowledge in declarative form, and attempting to minimlse the consequences of changes in the front end environmant, whether of domain or database model, to promote s~ooth transfers cf the front end from one back end database management system to another.We believe that there is a lot of mileage to be got from non-task-specific semantic analysis of user requests, because their resulting rich, explicit, and ncrmalised meaning representations are a ~ccd starting point for subsequent task-specific operations, and specificall~ are better than either syntax trees, or the actual input text of e.g. the PLANES approach. Furthermore, since the domain world is (in some sense) a subset of the real world, it is possible to interpret descriptions of it using the same semantic apparatus and representation language as is used by the natural language analyser, which should allow easy and reliable linking of the natural language input words, domain world objects and relationships and data language terms and expressions. Since the connections between these do not appear hard-wired in the lexicon, but are established on the basis of matching rich semantic patterns, no changes at all should be required in the lexicon as the application moves from one domain or database to another, only expansions to allow for the semantic definitions of new words relevant to the new application.The approach leads to an overall front end structure as follows: Each process in the diagram above operates cn the output of the previous one. Processes I and 2 constitute the analysis phase, and processes 3 and the translation phase. Such a system has essentially been constructed, and is under active test; a detailed acccunt cf its components and operations follows.For the purposes of illustration we shall use questions addressed to the Suppliers and Parts relational database of (Date 1977). This has three relaticns with the following structure: Supplier(Snc, Shame, Status, Scity), Part(Pno, Pname, Colour, Weight, Pcity), and Shipments(Sno, Pnc, Quantity).A. The Anal)metThe natural language anal l met has been described in detail elsewhere (Boguraev 1979) , (Boguraev and Sparck Jones 1982) , and only a brief summary will be presented here. 
It has been designed as a general purpose, domain-and task-independent language processor, driven by a fairly extensive llnguistlcally-motivated grammar and controlled in its operation by variegated application cf a rich and powerful semantic apparatus. Syntacticallycontrolled constituent identification is coupled with the Judgemental application cf semantic specialists:following the evaluation of the semantic plausibility of the constituent at hand, the currently active processor either aborts the analysis path or constructs a meaning representation for the textual unit (noun phrase, ccmplementiSero embedded clause, etc.) for incorporation into any larger semantic construct. The philosophy behind the anal yser is that syntactlcally-drlven analysis (which is a major prerequisite for domain-and/or task-independence) is made efficient by frequent and timely calls to semantic specialists, which both control blind syntactic backtracking and construct meaning representations for input text without going through the potentiall y costly enumeration of intermediate syntactic trees. The analyser can therefore operate smoothly in environments which are syntactically or lexically hlghiy ambiguous.To achieve its objectives the program pursues a passive parsing strategy based on semantic pattern matching of the kind proposed by (Wilks 1975) . Thus the semantic specialists work with a range of patterns referring to narrower or broader word classes, all defined using general semantic primitives and ultimately depending on formulae which use the primitives to characterise individual word senses. However the application of patterns in the search for input text meaning is mcre effectively controlled by syntax in this system than in Wilks'.The particular advantages of the approach in the database application context are the powerful and flexible means of representing linguistic and world knowledge provided by the semantic primitives, and the ease with which 'traps for the unexpected' can be procedurally encoded. The latter means that the system can readily deal with the kinds cf problems generated by unconstrained natural language text which provoke untoward 'ripple' effects when large semantic grammars are mcdified. For present purposes, the form and ccntent cf the outputs of the natural language analyser are more important than the means by which they are derived (for these see Boguraev and Sparck Jones 1982). The meaning representations output by the analyser are dependency structures with clusters of case-labelled components centred around main verb or noun elements. Apart from the structure of the dependency tree itself, and group identifying markers like 'ins' and 'modallty', the substantive information in the meaning representation is provided by the case labels, which are drawn from a large set of semantic relation primitives forming part of the overall inventory of primitives, and by the semantic category primitive characterisations of lexicallyderived items.The formulae charaoterislng word senses may be quite rich. The fairly straightforward characterisation of 'supplier1', representing one sense of "supplier" is (Supplier ...( supplier 1 (~(ee~t obJe) give) (subJ CorK)) ...), meaning approximately that some sort of organisatton (which may reduce to an individual) gives entities. The meaning representation for the whole sentence "Suppliers live in cities" (with the formulae for individual units abbreviated, for space reasons, to their head primitives) is( el ause ........ (v (livel ... 
be I @@agent (n (supplierl ... am))) ee~oca~ion (n (city2 ... spread)))))), where ~and @location are case labels. "The parts are coloured red" will be analysed as( el ause ...... (v (be2 ... be thin in tpartl ... mennK)))yl(@@number (@~state ~:~ <colourl ... sign) (val (red1 ... sense))))))), and "Who supplies green parts?" will give rise to the structure:(clause ... (type question) (v (supplyl ... 81ve (@@agent (n (query (d~y)))) ~race (clause V agent)) (clause (v (be2 ... be (@@@gent £n <partl ... ~InS))) (@@state (st (eolourl ... sign) (gr, eenl ... , tsee ~.se))))))))))))).As these examples sho~ the anal yser's representations combine expressive power with structural simplicity. Further, the power of the semantic category primitives used to identify text message patterns means that it is possible to achieve far mcre semantic analysis cf a question, far earlier in the frcnt end processing, than can be achieved with frcnt ends conforming tc the Koncllge model. The effectiveness cf the anal yser as a general natural-language prccesslng device has been demcnstrated by its successful application to a range of natural language processing tasks. There is, however, a price to pay, in the database context, for its generality. Natural language makes ocn=acn use of vague concepts ("have", "do"), almost content-empty markers ("be e, "of"), and opaque constructions such as compound nouns. Clearl~ front ends where domainspecific information can provide leverage in interpreting these input text items have advantages. and it is not clear how a principled solution to the problems they present can be achieved within the framework of a general-purpose anal yser of the kind described.To provide a domain-specific interpretation of, for example, compounds like "supplier city", an interface would have to be provided oharaeterising domain k~owledge in the semantic terms familiar to the parser, and guaranteeing the provision of explicit structural charaoterlsations of the text constituent which would be available for further exploitation by the parser.To avoid invoking domain knowledge in this way in analysis we have been obliged to accept questicn interpretations which are incomplete in limited respects. That is, we push the ordinary semantic analysis procedures as far as they will go, accepting that they may leave 'dummy' markers in the dependency structure and compound nominals with ambiguous member words and no explicit extracted structure. though not yet domain-and databaseoriented, processing. Imposing domain world and database organisatlon restrictions on the question at this stage would be premature, since it cculd ecmplloate or even inhibit possible later inference operations. 
The idea cf providing a system ccmponent addressing a general linguistic task, withcut throwing away any detailed information not in fact needed for scme specific instance cf that task, like natural language distinctions between quantifiers ignored by the database system, is also an attractive one.The extractor thus emphasises the fact that the input text is a questicn, but carries the detailed semantic information provided by the analyser forward fcr exploitation in the translation phase cf the processing.A gccd way to achieve a question formulation abstracted from the low-level crganisaticn cf the database is to interpret the user's input as a formal quer~ However our extractor, unlike the equivalent processors described by (Wocds 1972 The logic representation of the question which is output by the extractor highlights the search aspects cf the input, formalising them so that the subsequent processes which will eventually generate the search specification for the database management system can locate and focus on them easily; at the same time, the semantic richness of the original meaning representation is maintained to facilitate the later domain-crlented translation operations.The syntax of the logic representation closely follc~ that defined by (Wocds 1978) :(For <quantifier> <variable> / <range> : <restrictions on variable> -<prcpcslticn> ),where each cf the restrictions, or the proposition, can themselves be quantified expressions. The rationale for such quantified expressions as media for questions addressed towards an abstract database has been discussed by Woods. As we accept this, we have developed a transformation procedure which takes the meaning representation of an input question and ccnstructs a corresponding logic representation in the form just described. Thus for the question "Who supplies green parts?" analysed in Section A, we obtain (For Every SVarl / query:(For Every $Var2 / part1 : (cclourl $Var2 8reenl) -(supply1SVarl SVar2)) (Display SVarl)).where the lexically-derived items indicating the ranges of the quantified variables ('query', 'part1'), the relationships between the variables ('supply1') and the predicates and predicate values ('cclcur1', 'green I') in fact carry along wltb them their semantic formulae: these are omitted here, and in the rest cf the paper, to save space.The extractor is geared to seek, in the analyser's dependent y structures, the simple prc positicns (atomic predications) which make up the logic representaticn.Follcwing the philcscphy cf the semantic thecry underlying the analyser design, these simple prcpositicns are identified wlth the basic messages, i.e. semantic patterns, which drive the parser and are expressed in the meaning representations it produces as verb and noun group clusters of case-related elements.In order to 'unpack' these, the extractor iccks for the sources cf atomic predicates as 'SVO' triples, identifiable by a verb (cr ncun) and its case rcle fillers, which can be extracted quite naturally in a straightforward way from the dependency structure.Depending bcth cn the semantic characterisaticn cf the verb and its case arguments, and cn the semantic context as defined by the dependency tree, the triples are categcrised as belcnging to cne cf two types:[$ObJ SLink $ObJ]. or [$Obj SPoss SPrcp].where the $Obj, SLink. or $Prcp items are further characterised in semantic terms. 
It is clear that the 'basic messages' that the extractor seeks to identify as a preliminary step tc ccnstructing the logic representation define either primitive relationships between objects, cr properties of those same cbjects. Thus the meaning representation for "part suppliers" will be unpicked as a 'dummy' relationship between "suppliers" and "parts", i.e. as[$ObJ1(supplierl) $Link1(dummy) $Obj2(partl)].while "green parts" will be interpreted as[$Obj2(part 1) $Poss(be2) SProp(colourl =green 1) ].Larger constructs can be similarly deocmpcsed: thus "Where do the status 32 red parts suppliers live?" will be broken down into the following set of triples:[$ObJl(supplierl) SLinkl(livel) $ObJS(query)] & [$Objl(supplierl) SLink2(dummy) $Ob~2(partl)] & [$Objl(supplierl) SPossl(be2) $Prcpl(status=32) ] & [$Obj2(partl) SPcss2(be2) $Prcp2(cclcurl=redl)J.It must be empbasised that while there are parallels between these structures and those of the entityattribute approach to data modelling, the forms cf triple were chosen without any reference to databases. As noted earlier, they naturally reflect the form of the 'atomic propositions', i.e. basic messages, used as semantic patterns by the natural language anal yser.For completeness, the triples underlying the earlier question "Who supplies green parts?" are[$Obj1(query=identity) $Llnkl(supplyl) $Ob32(partl)] & [$Obj2(part 1) $Possl(be2) $Prcpl(cclcurl=greenl)]The sets cf interconnected triples are derived from the meaning representations by a fairly simple recursive prccedure. The icgic representaticn defines the logical content and structure cf the information the user is seeking. It may, as ncted, be inccmplete at pcints where domain reference is required, e.g. in the interpretation cf compound ~cuns; but it carries along, tc the translator, the very large amcunt cf semantic information provided by the case labels and formulae of the meaning representation, which should be adequate to pinpoint the items sought by the user and tc describe them in terms suited to the database management system, so they may be accessed and retrieved.In the process of transforming the semantic content of the user's question into a low-level search representation geared to the administrative structure of the target database, it is necessary to reconcile the user's view of the world with the domain model. Before even attempting to construct, Say, a relational algebra expression to be interpreted by the back-end database management system, we must try to interpret the semantic content of the loKlc representation with reference to the se~emt cr variant of the real world modelled by the database.An obvious possibility here is to proceed directly from the variables and predications of the Icglc representation to their database counterparts. For example,svarl/supplierl (Bin) SVar2/partl (t~t~)) can be mapped directly onto a relation Shipments in the Suppliers and Parts database. The mapping could be established by reference to the lexicon and to a schedule of equivalences between logical and database structures. This approach suffers, however, from severe problems: the most important is that end users do not necessarily constrain their natural language to a highly limited vocabulary. Even in the simple context of the ~,ppliers and Parts database, it is possible to refer to "firms", "goods", "buyers", "sellers", "provisions", "customers", etc. In fact, it was precisely in order to bring variants under a common denominator that semantic grammars were employed. 
We, in contrast, have a more powerful, because more flexible, semantic apparatus at our disposal, capable of drawing out the similarities between "firms", "sellers", and "suppliers", as opposed to taking them as read. Thus a general semantic pattern which will match the dictionary definitions of all of these words is (((cent obje) give) (subj |org)). Furthermore, if instead of attempting to define any sort of direct mapping between the natural language terms and expressions of the user and corresponding domain terms and expressions, we concentrate on finding the common links between them, we can see that even though the domain and, in turn, database terms and expressions may not mean exactly the same as their natural language relatives or sources, we should be able to detect overlaps in their semantic characterisations. It is unlikely that the same or similar words will be used in both natural and data languages if their meanings have nothing in common, even if they are not identical, so characterising each using the same repertoire of semantic primitives should serve to establish the links between the two. Thus, for example, one sense of the natural language word "location" will have the formula (this (where spread)) and the data language word "&city" referring to the domain object &city will have the formula (((man folk) wrap) (where spread)), which can be connected by the common constituent (where spread).

One distinctive feature of our front end design, the use of general semantics for initial question interpretation, is thus connected with another: the more stringent requirements imposed on natural language to data language translation by the initial unconstrained question interpretation can be met by exploiting the resources for language meaning representation initially utilised for the natural language question interpretation. We define the domain world modelled by the database using the same semantic apparatus as the one used by the natural language front end processor, and invoke a flexible and sophisticated semantic pattern matcher to establish the connection between the semantic content of the user question (which is carried over in the logic representation) and related concepts in the domain world. Taking the next step from a domain world concept or relationship between domain world objects to their direct model in the administrative structure of the database is then relatively easy.

Since the domain world is essentially a closed world restricted in sets if not in their members, it is possible to describe it in terms of a limited set of concepts and relationships: we have possible properties of objects and potential relationships between them. We can talk about &suppliers and &parts and the important relationship between them, namely that &suppliers &supply &parts. We can also specify that &suppliers &live in &cities, &parts can be &numbered, and so on. We can thus utilise, either explicitly or implicitly, a description of the domain world which could be represented by dependency structures like those used for natural language. The important point about these is the way they express the semantic content of whole statements about the domain, rather than the way they label individual domain-referring terms as, e.g. "&supplier" or "&part".
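The overlap detection sketched here can be illustrated with a toy matcher over nested-tuple formulas; this is not the system's pattern matcher, just a minimal demonstration using the two formulas quoted above.

```python
# Toy matcher: a word's formula is a nested tuple of semantic primitives,
# and two formulas are 'connected' if they share a sub-constituent.
def constituents(formula):
    """All sub-trees of a formula, including the formula itself."""
    found = {formula}
    if isinstance(formula, tuple):
        for part in formula:
            found |= constituents(part)
    return found

def common_links(f1, f2):
    """Shared non-atomic constituents of two formulas."""
    return {c for c in constituents(f1) & constituents(f2) if isinstance(c, tuple)}

location = ("this", ("where", "spread"))                  # NL word "location"
city = ((("man", "folk"), "wrap"), ("where", "spread"))   # domain word "&city"

print(common_links(location, city))
# {('where', 'spread')}
```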
It is then easy to see how the logic representation for the question "What are the numbers of the status 30 suppliers?", namely (For Every $Var1/supplier1 : (status1 $Var1 30) - (Display (number1 $Var1))), can be unpacked by semantic pattern matching routines to establish the connection between "supplier1" and "&supplier", "number1" and "&number", and so on. In the same way the logic representations for "From where does Blake operate?" and "Where are screws found?" can be analysed for semantic content which will establish that "Blake" is a &supplier, "operate" in the context of the database domain means &supply, and "where" is a query marker acting for the &city from which the &supplier Blake &supplies (as opposed to street corner, bucket shop, or crafts market); similarly, "screw" is an instance of &part and the only locational information associated with &parts in the database in question is the &city where they are stored. All this becomes clear simply by matching the underlying semantic primitive definitions of the natural language and domain world words, in their propositional contexts.

The translator is also the module where domain reference is brought in to complete the interpretation of the input question where this cannot be fully interpreted by the analyser alone. ... (&live $Var1(&supplier) $Var3(&city)), while translating the logic representation for the example question "Who supplies green parts?" gives the query representation (For Every $Var1/&supplier : (For Every $Var2/&part : (&colour $Var2 green) - (&supply $Var1 $Var2)) - (Display $Var1)).

Apart from the fact that semantic pattern matching seems to cope quite successfully with unexpected inputs ('unexpected' in the sense that in the alternative approach no mapping function would have been defined for them, thus implying a failure to parse and/or interpret the input question), having a general natural language analyser at our disposal offers an additional bonus: the description of the domain world in terms of semantic primitives and primitive patterns can be generated largely automatically, since the domain world can be described in natural language (assuming, of course, an appropriate lexicon of domain world words and definitions) and the descriptions simply analysed as utterances, producing a set of semantic structures which can subsequently be processed to obtain a repertoire of domain-relevant forms to be exploited for the matching procedures.

Having identified the domain terms and expressions, we have a high-level database equivalent of the original English question. A substantial amount of processing has pinpointed the question focus, has eliminated potential ambiguities, has resolved domain-dependent language constructions, and has provided fillers for 'dummy' or 'query' items. Further, the system has established that "London" is a &city, for example, or that "Clark" is a specific instance of &supplier. The processing now has to make the final transition to the specific form in which questions are addressed to the actual database management system.
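As an illustration of this final substitution step, the sketch below rewrites a nested logic form with domain terms; in the real translator the correspondences are established by the pattern matcher, whereas here they are simply listed by hand.

```python
# Illustrative final substitution step: rewrite a nested logic form with
# domain-world terms.  In the system the correspondences come out of the
# pattern matcher; here they are simply listed by hand.
SENSE_TO_DOMAIN = {
    "query": "&supplier",   # the 'who' of this question ranges over suppliers
    "part1": "&part",
    "supply1": "&supply",
    "colour1": "&colour",
    "green1": "green",
}

def to_query_representation(form):
    """Recursively substitute domain terms into a nested logic form."""
    if isinstance(form, tuple):
        return tuple(to_query_representation(x) for x in form)
    return SENSE_TO_DOMAIN.get(form, form)

# Applied to the nested form built in the earlier sketch, this yields
# (For Every $Var1 / &supplier : (For Every $Var2 / &part :
#   (&colour $Var2 green) - (&supply $Var1 $Var2)) - (Display $Var1)).
```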
The semantic patterns on which the translator relies, for example defining a domain word "&supplier" as (((cent obje) give) (subj |org)), while adequate enough to deduce that Clark is a &supplier, are not informative enough to suggest how &suppliers are modelled in the actual database. Again, the obvious approach to adopt here is the mapping one, so that, for instance, we have:

&supplier ==> relation Supplier
Clark ==> tuple of relation Supplier such that Sname = "Clark"

But this approach suffers from the same limitations as direct mapping from logic representation to search representation; and a more flexible approach using the way the database models the domain world has been adopted.

In the previous section we discussed how the translator uses an inventory of semantic patterns to establish the connection between natural language and domain world words. This inventory is not, however, a flat structure with no internal organisation. On the contrary, the semantic information about the domain world is organised in such a way that it can naturally be associated with the administrative structure of the target database. For example, in a relational database, a relation with tuples over domains represents properties of, or relationships between, the objects in the domain world. The objects, properties and relationships are described by the semantic apparatus used for the translator, and as they also underlie, at not too great remove, the database structure, the domain world concepts or predications of the query representation act as pointers into the data structures of the database administrative organisation. For example, given the relation Supplier over the domains Sname, Sno, Status and Scity, the semantic patterns which describe the facts that in the domain world &suppliers &have &status, &numbers, &names and &live in &cities are crosslinked, in the sense that they have the superstructure of the database relation Supplier imposed over them. We can thus use them to avoid explicit mapping between query data references and template relational structures for the database. From the initial meaning representation for the question fragment "... Clark, who has status 30 ..." through to the query representation, the semantic pattern matching has established that Clark is an instance of &supplier, that the relationship between the generic &supplier and the specific instance of &supplier (i.e. Clark) is that of &name, and that the query is focussed on his &status (whose value is supplied explicitly). Now from the position of the query predication (&status &supplier 30) in the characterisation of the relation Supplier, the system will be able to deduce that the way the target database administrative structure models the question's semantic content is as a relation derived from Supplier with "Clark" and "30" as values in the columns Sname and Status respectively. The convertor thus employs declarative knowledge about the database organisation and the correspondence between this and the domain world structure to derive a generalised relational algebra expression which is an interpretation of the formal query in the context of the relational database model of the domain.
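A toy rendering of such declarative crosslinking might look as follows; the relation and column names follow the Suppliers and Parts example, while the data layout and function are invented for illustration.

```python
# Invented declarative crosslinks: which relation and columns model the
# domain predications that can be made about &supplier.
SCHEMA = {
    "&supplier": {
        "relation": "Supplier",
        "predications": {"&name": "Sname", "&number": "Sno",
                         "&status": "Status", "&live": "Scity"},
    },
}

def restrictions_to_selection(obj, predications):
    """Turn domain predications about one object into column restrictions."""
    entry = SCHEMA[obj]
    conditions = {entry["predications"][p]: value for p, value in predications}
    return entry["relation"], conditions

# "... Clark, who has status 30 ..."
print(restrictions_to_selection("&supplier",
                                [("&name", "Clark"), ("&status", 30)]))
# ('Supplier', {'Sname': 'Clark', 'Status': 30})
```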
We have chosen to gear the convertor towards a generalised relational algebra expression, because both its simple underlying definition and the generality of its data structures within the relational model allow easy generation of final low-level search representations for different specific database access systems. To derive the generalised relational algebra form of the question from the query representation, the convertor uses its knowledge of the way domain objects and predications are modelled in the database to establish a primary or derivable relation for each of the quantified variables of the query representation. These constituents of the algebra expression are then combined, with an appropriate sequence of relational operators, to obtain the complete expression.

The basic premise of the convertor is that every quantified variable in the formal representation can be associated with some primary or computable relation in the target database; restrictions on the quantified variables specify how, with that relation as a starting point, further relational algebra computations can be performed to model the restricted variable; the process is recursive, and as the query representation is scanned by the convertor, variables and their associated relational algebra expressions are bound by an 'environment-type' mechanism which provides all the necessary information to 'evaluate' the propositions of the query. Thus conversion is evaluating a predicate expression in the context of its semantic interpretation in the domain world and the environment of the database models for its variables. For example, given the query representation fragment for the phrase "... all London suppliers who supply red parts ...", namely

(For Every $Var1/&supplier : (AND (For The $Var3/London - (&live $Var1 $Var3)) (For Every $Var2/&part : (&colour $Var2 red) - (&supply $Var1 $Var2))) ...,

$Var1 will initially be bound to the primary relation Supplier, which will be subsequently restricted to those tuples where Scity is equal to "London". Similarly $Var2 will be associated with a partial relation derived from Part, for which the value of Colour is "red". Evaluating the proposition (&supply $Var1 $Var2), whose domain relationship is modelled in the database by Shipments, will in the environment of $Var1 and $Var2 yield the relational expression

(join (select Supplier where Scity equals "London") (join Shipments (select Part where Colour equals "red"))).

At this point, the information that the user wants has been described in terms of the target relational database: names of files, fields and columns. The search description has, however, still to be given the specific form required by the back-end database management system. This is achieved by a fairly straightforward application of standard compiling techniques, and does not deserve detailed discussion here. At present we can generate search specifications in three different relational search languages. Thus the final form in the local search language Salt of the example question "Who supplies green parts?" is

list (Part:Colour="green" • (Supplier • Shipments))

V IMPLEMENTATION

All of the modules have been implemented (in LISP). The convertor is at present restricted to relational databases, and we would like to extend it to other models.
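Returning to the conversion step, a compressed sketch of the select/join pattern of the example above is given below; it covers only this one pattern and uses invented lookup tables and helper names rather than the convertor's actual environment mechanism.

```python
# Toy convertor for the select/join pattern of the example; the lookup
# tables are invented declarative knowledge, not the system's own.
PRIMARY_RELATION   = {"&supplier": "Supplier", "&part": "Part"}
LINK_RELATION      = {"&supply": "Shipments"}
RESTRICTION_COLUMN = {("&supplier", "&live"): "Scity",
                      ("&part", "&colour"): "Colour"}

def select(relation, column, value):
    return f'(select {relation} where {column} equals "{value}")'

def convert(bindings, link):
    """Bind each variable to a (possibly restricted) relation, then join."""
    env = {}
    for var, (obj, restriction) in bindings.items():
        relation = PRIMARY_RELATION[obj]
        if restriction is not None:
            predicate, value = restriction
            relation = select(relation, RESTRICTION_COLUMN[(obj, predicate)], value)
        env[var] = relation
    predicate, v1, v2 = link
    return f"(join {env[v1]} (join {LINK_RELATION[predicate]} {env[v2]}))"

# "... all London suppliers who supply red parts ..."
print(convert({"$Var1": ("&supplier", ("&live", "London")),
               "$Var2": ("&part", ("&colour", "red"))},
              ("&supply", "$Var1", "$Var2")))
# (join (select Supplier where Scity equals "London")
#       (join Shipments (select Part where Colour equals "red")))
```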
The system has so far been tested on Suppliers and Parts, which is a toy database from the point of view of scale and complexity, but which is rich enough to allow questions presenting challenges to the general semantics approach to question interpretation. To illustrate the performance of the front end, we show below the query representations and final search representations for some questions addressed to this database. Work is currently in progress to apply the front end to a different (relational) database containing planning information: this simulates IBM's TQA database (Damerau 1980). Most of the work in this is likely to come in writing the lexical entries needed for the new vocabulary. Longer term developments include validating each step of the translation by generating back into English, and extending the front end, and specifically the translator, with an inference engine.

Clearly, in the longer term, database front ends will have to be provided with an inference capability. As Konolige points out, in attempting to insulate users, with their particular and varied views of the domain of discourse, from the actual administrative organisation of the database, it may be necessary to do an arbitrary amount of inferencing exploiting domain information to connect the user's question with the database. An obvious problem with front ends not clearly separating different processing stages is that it may be difficult to handle inference in a coherent and controlled way. Insofar as inference is primarily domain-based, it seems natural in a modular front end to provide an inference capability as an extension of the translator. This should serve both to localise inference operations and to facilitate them because they can work on the partially-processed input question. However the inference engine requires an explicit and well-organised domain model, and specifically one which is rather more comprehensive than current data models, or than the rather informal conceptual schema we have used to drive the translator. We hope to begin work on providing an inference capability in the near future, but it has to be recognised that even for the restricted task of database access, it may prove impossible to confine inference operations to a single module: doing so would imply, for example, that compound nouns will generally only be partly interpreted in the analysis and extraction phases. Starting with inference limited to the translation module is therefore
Appendix:
| null | null | null | null | {
"paperhash": [
"grosz|dialogic:_a_core_natural-language_processing_system",
"warren|an_efficient_easily_adaptable_system_for_interpreting_natural_language_queries",
"waltz|an_english_language_question_answering_system_for_a_large_relational_database",
"hendrix|developing_a_natural_language_interface_to_complex_data",
"scha|semantic_grammar:_an_engineering_technique_for_constructing_natural_language_understanding_systems",
"wilks|an_intelligent_analyzer_and_understander_of_english",
"date|an_introduction_to_database_systems"
],
"title": [
"DIALOGIC: A Core Natural-Language Processing System",
"An Efficient Easily Adaptable System for Interpreting Natural Language Queries",
"An English language question answering system for a large relational database",
"Developing a natural language interface to complex data",
"Semantic grammar: an engineering technique for constructing natural language understanding systems",
"An intelligent analyzer and understander of English",
"An Introduction to Database Systems"
],
"abstract": [
"The DIALOGIC system translates English sentences into representations of their literal meaning in the context of an utterance. These representations, or \"logical forms,\" are intended to be a purely formal language that is as close as possible to the structure of natural language, while providing the semantic compositionality necessary for meaning-dependent computational processing. The design of DIALOGIC (and of its constituent modules) was influenced by the goal of using it as the core language-processing component in a variety of systems, some of which are transportable to new domains of application.",
"This paper gives an overall account of a prototype natural language question answering system, called Chat-80. Chat-80 has been designed to be both efficient and easily adaptable to a variety of applications. The system is implemented entirely in Prolog, a programming language based on logic. With the aid of a logic-based grammar formalism called extraposition grammars, Chat-80 translates English questions into the Prolog subset of logic. The resulting logical expression is then transformed by a planning algorithm into efficient Prolog, cf. \"query optimisation\" in a relational database. Finally, the Prolog form is executed to yield the answer. On a domain of world geography, most questions within the English subset are answered in well under one second, including relatively complex queries.",
"By typing requests in English, casual users will be able to obtain explicit answers from a large relational database of aircraft flight and maintenance data using a system called PLANES. The design and implementation of this system is described and illustrated with detailed examples of the operation of system components and examples of overall system operation. The language processing portion of the system uses a number of augmented transition networks, each of which matches phrases with a specific meaning, along with context registers (history keepers) and concept case frames; these are used for judging meaningfulness of questions, generating dialogue for clarifying partially understood questions, and resolving ellipsis and pronoun reference problems. Other system components construct a formal query for the relational database, and optimize the order of searching relations. Methods are discussed for handling vague or complex questions and for providing browsing ability. Also included are discussions of important issues in programming natural language systems for limited domains, and the relationship of this system to others.",
"Aspects of an intelligent interface that provides natural language access to a large body of data distributed over a computer network are described. The overall system architecture is presented, showing how a user is buffered from the actual database management systems (DBMSs) by three layers of insulating components. These layers operate in series to convert natural language queries into calls to DBMSs at remote sites. Attention is then focused on the first of the insulating components, the natural language system. A pragmatic approach to language access that has proved useful for building interfaces to databases is described and illustrated by examples. Special language features that increase system usability, such as spelling correction, processing of incomplete inputs, and run-time system personalization, are also discussed. The language system is contrasted with other work in applied natural language processing, and the system's limitations are analyzed.",
"One of the major stumbling blocks to more effective used computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language under-standing systems. The primary purpose of this research is not to advance our theoretical under-standing of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.",
"The paper describes a working analysis and generation program for natural language, which handles paragraph length input. Its core is a system of preferential choice between deep semantic patterns, based on what we call “semantic density.” The system is contrasted: with syntax oriented linguistic approaches, and with theorem proving approaches to the understanding problem.",
"From the Publisher: \nFor over 25 years, C. J. Date's An Introduction to Database Systems has been the authoritative resource for readers interested in gaining insight into and understanding of the principles of database systems. This revision continues to provide a solid grounding in the foundations of database technology and to provide some ideas as to how the field is likely to develop in the future.. \"Readers of this book will gain a strong working knowledge of the overall structure, concepts, and objectives of database systems and will become familiar with the theoretical principles underlying the construction of such systems."
],
"authors": [
{
"name": [
"B. Grosz",
"Norman Haas",
"G. Hendrix",
"Jerry R. Hobbs",
"P. Martin",
"Robert C. Moore",
"Jane J. Robinson",
"S. Rosenschein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Warren",
"Fernando C Pereira"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Waltz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Hendrix",
"E. Sacerdoti",
"Daniel Sagalowicz",
"Jonathan Slocum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. J. H. Scha"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Y. Wilks"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. J. Date"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"11289202",
"2498523",
"18227465",
"15391397",
"263227606",
"5968738",
"227993896"
],
"intents": [
[],
[],
[],
[],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false
]
} | Problem: The paper aims to develop a front end for natural language access to databases that utilizes general, domain-independent semantic information for question interpretation.
Solution: The hypothesis proposes that by incorporating powerful, non-domain-dependent semantics in the front-end linguistic analysis, the system can improve syntax restraint and provide a robust platform for domain-dependent semantic processing, while maintaining a clear separation of different types of knowledge required for database access. | 504 | 0.041667 | null | null | null | null | null | null | null | null |
b5b636539ec449baf3343347ef14c3a78587c131 | 530268 | null | Natural Language Text Segmentation Techniques Applied to the Automatic Compilation of Printed Subject Indexes and for Online Database Access | The nature of the problem and earlier approaches to the automatic compilation of printed subject indexes are reviewed and illustrated. A simple method is described for the detection of semantically self-contained word phrase segments in title-like texts. The method is based on a predetermined list of acceptable types of nominative syntactic patterns which can be recognized using a small domain-independent dictionary. The transformation of the detected word phrases into subject index records is described. The records are used for the compilation of Key Word Phrase subject indexes (KWPSI). The method has been successfully tested for the fully automatic production of KWPSI-type indexes to titles of scientific publications. The usage of KWPSI-type display formats for the enhanced online access to databases is also discussed. | {
"name": [
"Vladutz, G."
],
"affiliation": [
null
]
} | null | null | First Conference on Applied Natural Language Processing | 1983-02-01 | 8 | 2 | null | The nature of the problem and earlier approaches to the automatic compilation of printed subject indexes are reviewed and illustrated. A simple method is described for the de~ection of semantically self-contained word phrase segments in title-like texts. The method is based on a predetermined list of acceptable types of nominative syntactic patterns which can be recognized using a small domain-independent dictionary.The transformation of the de~ected word phrases into subject index records is described. The records are used for ~he compilation of Key Word Phrase subJec= indexes (K~PSI). The me~hod has been successfully tested for the fully automatic production of KWPSI-type indexes to titles of scientific publications.The usage of KWPSI-type display forma~s for the~enhanced online access to databases is also discussed.Printed subject indexes (SI), such as back-of-the-book indexes and indexes to periodicals and abstracts journals remain important as the most common tools for information retrieval.Traditionally SI are compiled from subject descriptions produced for this purpose by human indexers. Such subject descriptions are usually nominalized sentences in which the word order is chosen to emphasize as theme one of the objects participating in the description; the corresponding word or word phrase is placed at the beginning of the nominative construction.Furthermore, the nominalized sentence is rendered in a specially transformed ('articulated') way involving the separated by commas display of component word phrases together with the dominating prepositions; e.g. the sentence 'In lemon juice lead (is) determined by atomic absorption spectrometry' becomes 'LEMON JUICE, lead determination in, by atomic absorption spectroscopy.' Such rendering enhances the speedy understanding of the descriptions when browsing the index. At the same time it creates for the subject ~escription a llneary ordered sequence of focuses which can be used for the hierarchical multilevel grouping of related sets of descriptions.The main focus (theme) serves for the grouping of descriptions under a corresponding subject heading, the secondary focuses make possible the further subdivision of such group by subheadings. This is illustrated on the SI fragment to "Chemical Abstracts" shown on figure [. Fragment of a subject index of traditional type to "Chemical Abstracts," compiled from subject descriptions by human indexers. A text processing problem, studied in connection with the compilation of such si of traditional type, was the automatic transformation of subject descriptions for selecting their different possible themes and focuses (Armitage, 1967) . An experimental procedure, not yet implemented, takes as input pre-edited subject descriptions (Cohen, 1976) .Since the generauion of subjec= descrip=ions by human indexers is a very expensive procedure P. Luhn (1959) of IBM has suggested replacing subject descriptions by titles provided by the publication's authors. Using only a 'negative' dictionary of high frequency words excluded from indexing, he designed a procedure for the automatic compilation of listings where fragments of titles are displayed repeateuly for all their lndexable words, These words are alphabetized and displayed on the printed page in the central position of a column; their contextual fragments are sorted according to the right-hand side COntextS of the index words. 
Such listings, called Key-Word-ln-Context (ENIC) indexes, have been produced and successfully marketed since 1960 "quick-a~d-dirty" $I, despite their 'mechanical' appearance which makes them difficult enough to read and browse. A fragment of KNIC index to "Biological Abstracts," featuring titles enriched by addition~l key words is shown in figure 2. Another mechanically compiled SI substitute still in current use is based on a similar idea and simply groups together all the titles concaining a same indexable word. Such Key-Word-out-of-Context (~WOC) indexes display the full texts of titles under a common beading. Figure 3 shows a KWOC sample generated from title-Like subject descriptions ac the Institute for Scientific Information, The appearance of KWOC indexes is more steep,abe but their bro~elng is much hindered by the lack of articulation of the Lengthy subject descriptions (titles). Without proper articulation, the recognition of the context immediately relevant to the index word becomes too slow.In 1966 the Institute for Scientific Information (ISI) introduced a different type of automatically compiled subject index called PERMUTERM Subject Index (PSI) OGarfieid, 19760, which at present is the main type of SI to the Science Cltation Index and other similar ISI publications.Two different negative dictionaries are used for producing ~his SI: a so called "full stop list" of words excluded from becoming headings as well as from being used as subordinate index entries, and a "semi-stop List" of ~ords of little informative value, which are noC allowed as headings but are used as index entries along with words ~ound neither in the full-stop uor in ~he semi-stop Lists. ~n the PSI every word Co-occuring wi~h the heading word in some PSI has the unique ability to make possible the easy retrieval of all titles containing any given pair of informative words.This ability is similar to the ability of computerized online search systems to retrieve titles by any boolean combination of search terms.The corresponding PSI ability is available to PSI users who have been instructed about the principles used for compiling it. The naive user is more likely to utilize it as a browsing tool. When doing so, he may be inclined to perceive the subordinate word entries as being the immediate context of the headings.Used as a browsing tool, PSI may deliver relatively high percentage of false drops because of the lack of contextual information. Another shortcoming of the PSI is its relatively high cost due to its significant size which is proportional to the square of the average length of titles. The large number of entries subordinated to headings which are words of relatively high frequency makes the exhaustive scanning of entries under such headings a time consumLing procedure.An important advantage of all the above computer generated indexes over their manually compiled counterparts is the speed and essentially lower cost at which they are made available.All the above compilation procedures are based exclusively on the most trivial facts concerning the syntaxis and semantics of natural languages. 
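For illustration, a Luhn-style KWIC entry generator can be sketched in a few lines; the stop list, column width and formatting below are invented, and real KWIC production involved considerably more typographic care.

```python
# Rough illustration of Luhn-style KWIC entry generation: a title is indexed
# under every word not in the stop list, with the keyword pushed to a fixed
# column; the stop list and field widths here are invented.
STOP = {"a", "an", "and", "by", "for", "in", "of", "on", "the", "to"}

def kwic_entries(title, width=28):
    words = title.split()
    entries = []
    for i, word in enumerate(words):
        if word.lower() in STOP:
            continue
        left = " ".join(words[:i])
        right = " ".join(words[i + 1:])
        line = f"{left[-width:]:>{width}}  {word.upper()}  {right[:width]}"
        entries.append((word.lower(), right.lower(), line))
    # alphabetise by keyword, then by right-hand context
    return [line for _, _, line in sorted(entries)]

for line in kwic_entries("Automatic compilation of printed subject indexes"):
    print(line)
```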
They make use of the fact that texts are built of words, of the existence of words having purely syntactic functions and of the existence of lexlcal units of very little informative value.A common disadvantage versus the SI of traditional type is that the above procedures fall to provide articulated contexts which would be short enough and structurally simple enough to be easily 8rasped in the course of browsing.Certainly this problem can be solved by any systen which can perform the full syntactic analysis of titles or similar kinds of subject descriptions. From the syntactic tree of the title a brief articulated context can be produced for any given word of a title by detecting a subtree of suitable size which includes the given word.However, in the majority of cases the practical conditions of application of index compilation procedures are excluding the usage of full scale syntactic analysis, based on dictionaries containing the required morphological, syntactic and semantic information for all the lexical units of the processed input.For instance, ISI is processing annually for its mutlidisciplinary publications around 700,000 titles ranging in their subject orientation from science and technology to arts and humanities.The effort needed for the creation and maintenance of dictionaries covering several hundred thousands entries with a high ratio of appearance of new words would be excessive.Therefore, the automatic compilation of SI is practially feasible only on the basis of quite simplistic procedures based on "negative" dictionaries involving approximative methods of analysis which yield good results in ~he majority of cases, but are robust enough not to break down even in difficult cases.At one end of the range of problems involving natural language processsing are such as question answering which require a high degree of analytic sophistication and are based on a significant amount of domain dependent information formated in bulky lexicons.Such procedures appear to be applicable to texts dealing with rather narrow fields of knowledge in the same way as the high levels of iu-depth human expertise are usually limited to specific domains. On the other end of the spectrum are simple problems requiring much less domain dependent information and relatively low levels of "intelligence" (oefined as the ability to discuss comprehensive texts from gibberish); the corresponding procedures are usually applicable to wide categories of texts.For reasons explained above, we consider the problems of automatic compilation of subject indexes as belongin~ to this low end of the spectrum. | null | In this framework we developed an automatic procedure for the compilaclon of a SI based on :he detection and usage of word phrases.The earlier stages of development of this Key-Word-Phrase subject index (KW?SI) have been reported elsewhere (Vladutz 19795 .The procedure starts by detecting certain types of syntactically self-contained segments of the input text; such segments are expected to be semantically self-contained in view of the assumed well-formedness of the input.The segment detection procedure is based on a relatively short list of acceptable syntactic patterns, formulated in terms of markers attributable by a simple dictionary look-up. The markers are essentially ~he same as used in (Klein 1963) in the early days of machine translation for automatic grammatical coding of English words. 
All the words not found in an exlusion dictionary of "~ 1,500 words are assigned the two markers ADJ and NOUN.All the acceptable syntactic patterns are characterized in the frameworks of a generative gr=--,~r constructed for title-type texts.Sucn texts are described as sequences of segments of acceptable syntactic patterns separated by arbitrary filler segments whose syntactic pattern is different from the acceptable ones.The analysis procedure leading to the detection of acceptable segments was formulated as a reversal of the generative grammar and is performed by a right =o left scanning..~ew acceptable syntactic patterns can easily be incorporated into the generative grammar.It is envisaged to use in the future existing programs for automatically generating analysis programs from any specific variant of the grammar.The present list of acceptable syntactic patterns includes such patterns where noun phrases are concatenated by the preposition 'OF' anu the conjunctions 'AND', 'OR', 'AND/OR', as well as constructions of the type 'NPI, NP2, ... AND NPi'.Since no prepositions other than 'OF' and no conjunctions other than 'AND', 'OR', 'AND/OR' can occur in the acceptable segments the occurrences of other prepositions and conjunctions are used for initial delimitation of acceptable segments, but the detection procedure is not limited to such usage. In particular, a past participle or a group containing adve-bs followed by a past participle are excluded from the acceptable segment when preceuing an initial delimiter.The segmentation detection is illustrated for three titles in figure 5. Figure 5 .The detection of acceptable segments is shown for 3 titles.The words with all lowercase letters are prepositions and conjunctions used as initial deli~Liters.The words with only initial capital letters are "seml-stop" words, excluded from being used as index headings; the underscored by dotted lines "seml-stops" are past participles which become dellminters only when followed by initial dellmi~ars. The resulting multl-word phrases are underscored ~wice unlike the resultlng single word phrases which are underscored once.The first part of the system's dictionary con-Junctions, prepositions, articles, auxiliary verbs and pronouns.Th/s part is completely domain independent.A second part of the dictionary consists of nouns, adjectives, verbs, present and past participles, all of them of little informative value and, therefore, called "seml-stop" words. 
Such words will not be allowtd later to become SI headings.The semi-stop par~ of the dictionary is somewhat domain-dependent and has to be atuned for different broad fields of knowledge such as science and technology, social sciences or arts and humanities.The second logical step in the SI compilation involves the transformation of acceptable segments into index records consisting of an informative word (not found in the system's dictionary) displayed as heading llne and of an index llne providing some relevant context for the headlng word.Each multiword segment generates as many index records as many informative words it contains.The ri~ht-hand side of the segment following the heading word is placed at the beginning of the index line to serve as its i~nediate context and is followed through a senLicolon by the segment's left-hand side.When both sides are non-empty, an articulation of the index line is so achieved.In the case of a single word segment an "expansion" procedure is performed during index record generation.It starts by placing at the beginning of the index llne a fragment of the title consisting of the filler portion following the heading word and of the next acceptable segment, if any; this initial portion of the index line is followed by a semicolon after which follows the preceding acceptable segment, followed finally by the filler portion separating in the title this preceding segment from the heading word.The index record generation is illustrated in figure 6. final "enrichment" phase of the index record generatlon involves the additional display (in parenthesis) of the unused segments of the processed title. Figure 6 . The transformation of Key-Word-Phrases into subJec~ headlngs and subject entries is illmstrated for the first two seFjnents of the title A, Figure 6 . The last two examples snow how single word segments (from Title C) are expanded to incluoe the preceding and following them segemencs.As a result of this stage the informacioual value of the finally generated index record is almost equivalent to the information content of ~ne initial full title.The entire process ultimately boils down to the the reshuffling of some component segments of the initial title.The enric~ent stage of index record generation is illustrated on figure 7.The index records are alphabetized firstly by heading words and secondly by index lines with the exclusion from alphabetiza~ion of prepositions and conjunctions if they occur aC the beginning of index lines.During the photocomposition different parts of the index line are set usin~ different fonts. If in the original title the initial part of the index llne follows the head word i~nedlately this part is set in bold face italics, i.e. in the same font as the heading.The "inverted" part following the semicolon is set in light face roman letters.Finally the enrichment part of ~he index line, included in patens is always displayed in light-face italics.As a result the The enrichment of the subject entries by the display (in parenthesis) of the unused by them segments of the same title, illustrated for some of the entries of Figure 6 . 
immediately relevan~ coutext of the head word is displayed in bold face in order to facilitate its rapid grasping when browsing.Details of the appearance and s~ructure of KWPSI are exemplified in figure 8 on a sample compiled for titles of publications dealing with librarianship and information science.The general appearance of KWPSI is close enough to the appearance of SI of traditional type.For purposes of transportability the KWPSI system is programmed in ANSA COBOL.It includes two modules:the index generation module and the sorting and reformatting module.On an IBM 370 system index records are generated for titles of scholarly papers at a speed of ~ 70,000 titles/hour.The resulting total size of the index is of the same order as the size of KWOC indexes and compares favorably with the size of the PSI index.The analysis of ~he rates and ~aCure of failures of ~he segment detection algorithm shows that in 96% of cases the generated segments are fully acceptable as valuable index entries. In 2% of cases some important information is lost as a result of the elimination of prepositions, as in case of expressions of 'wood to wood' type. The rest of failures results in somehow awkward segments which are not completely semantically self-contained. Even in such cases the index entries retain some informative value.Around half of the failures can be eliminated by additions to the system's dictionary, especially by the inclusion of more verbs and past participles.Not counted aS failures are the 5% of cases when the leng=h of the detected segments is excessive; such segments can include the whole title.The extent of tuning required for =he application of the system in a new area of knowledge depends mainly upon ~he extent of figure 8. * ......................... (P~R~O~MANCE C(:~I~ARtSOIV) .................... IC.80"2 568 I~CAmULA~Y:FREE " ................. IC, 80, As a matter of fact all kinds of scholarly titles contain such deviations, as for instance portions of normal text included in parentheses or occurrences of mathematical or chemical symbols.We found only one case when the required tuning effort was siguifican~, namely the case of titles from ~he domain of arts and humanities.ISl's "Arts and Humanities Citation | The common method of online access to commercially available textual databases of both bibliographic and full cexc type is through boolean queries formulaCsd in tern~s of single words. AuCo~Icically detectable word phrases of the cype used in the KWP$1 sysCem could be used in ~L~ree different rays for improving online access.One extreme way would involve the creation of word phrases of the above cype a= the input stage for every informative word of the Input.In response to a slnsle word query a sequence of screens would be shown displaying the image of whaC in a printed KWPSl would be the KWPSI section under ~he Given word ta~eL~ as headin G . After browsing online some par~ o~ tni~ up-to-dace online 51 the user could choose to limi~ further browsin G by respondln~ wits an additional search term, mos~ likely chosen from some of t~le already examined index entries.As a result ~he system would reply by ellmlna~ing from ~ne displayed output the encrlas aoc concalnin 8 che given word an~ the user would conclnue co browse che so ~rimmed display.Several such i~eratlons could be performe~ until the user would be left with the display of a SI to relevant items of the database. 
This SI would be then printed together with the full list of relevant items.It is ~hought that such kind of interaction could be more user friendly than the currently used boolean mode.Another way of using the KWPSI technique in an online environment would be to use the KWPSI format for the output of the results of a retrieval performed in a traditional boolean way. The query word which achieved the most strong trimming effect would be used as heading.A third way would involve the compression of a KWFSI section under a given heading before it is displayed in response to a word.One could e.g. retain only such noun phrases containing the given word which occur at least k times in the database. An example of such list for the "Arts and Humanities" database is given in figure 12. By displaying such lists of words closely co-occurring with a given one ~he system would perform thesaurus-type functions.The implementation of all such possibilities would be rather difficult for any existin 8 system in view of the effort required to reprocess past input.Instead, after the input of a query word the corresponding full text records could be called a not processed online in core for generating KWPSl-type index records.In this case all the above functions could be still performed. Another possibility which we are considering is to place the K~PSI-type processing capabilities into a microcomputer which is being used to mediate online searches in remote databases.All the text records containin8 a given (not too frequent) word coulu be initially tapped from the database into the microcomputer.Following that the microcomputer could perform all the above functions in an offline mode, | null | Main paper:
the automatic compilation of key-word-phrase sub~ect indexes:
In this framework we developed an automatic procedure for the compilaclon of a SI based on :he detection and usage of word phrases.The earlier stages of development of this Key-Word-Phrase subject index (KW?SI) have been reported elsewhere (Vladutz 19795 .The procedure starts by detecting certain types of syntactically self-contained segments of the input text; such segments are expected to be semantically self-contained in view of the assumed well-formedness of the input.The segment detection procedure is based on a relatively short list of acceptable syntactic patterns, formulated in terms of markers attributable by a simple dictionary look-up. The markers are essentially ~he same as used in (Klein 1963) in the early days of machine translation for automatic grammatical coding of English words. All the words not found in an exlusion dictionary of "~ 1,500 words are assigned the two markers ADJ and NOUN.All the acceptable syntactic patterns are characterized in the frameworks of a generative gr=--,~r constructed for title-type texts.Sucn texts are described as sequences of segments of acceptable syntactic patterns separated by arbitrary filler segments whose syntactic pattern is different from the acceptable ones.The analysis procedure leading to the detection of acceptable segments was formulated as a reversal of the generative grammar and is performed by a right =o left scanning..~ew acceptable syntactic patterns can easily be incorporated into the generative grammar.It is envisaged to use in the future existing programs for automatically generating analysis programs from any specific variant of the grammar.The present list of acceptable syntactic patterns includes such patterns where noun phrases are concatenated by the preposition 'OF' anu the conjunctions 'AND', 'OR', 'AND/OR', as well as constructions of the type 'NPI, NP2, ... AND NPi'.Since no prepositions other than 'OF' and no conjunctions other than 'AND', 'OR', 'AND/OR' can occur in the acceptable segments the occurrences of other prepositions and conjunctions are used for initial delimitation of acceptable segments, but the detection procedure is not limited to such usage. In particular, a past participle or a group containing adve-bs followed by a past participle are excluded from the acceptable segment when preceuing an initial delimiter.The segmentation detection is illustrated for three titles in figure 5. Figure 5 .The detection of acceptable segments is shown for 3 titles.The words with all lowercase letters are prepositions and conjunctions used as initial deli~Liters.The words with only initial capital letters are "seml-stop" words, excluded from being used as index headings; the underscored by dotted lines "seml-stops" are past participles which become dellminters only when followed by initial dellmi~ars. The resulting multl-word phrases are underscored ~wice unlike the resultlng single word phrases which are underscored once.The first part of the system's dictionary con-Junctions, prepositions, articles, auxiliary verbs and pronouns.Th/s part is completely domain independent.A second part of the dictionary consists of nouns, adjectives, verbs, present and past participles, all of them of little informative value and, therefore, called "seml-stop" words. 
Such words will not be allowtd later to become SI headings.The semi-stop par~ of the dictionary is somewhat domain-dependent and has to be atuned for different broad fields of knowledge such as science and technology, social sciences or arts and humanities.The second logical step in the SI compilation involves the transformation of acceptable segments into index records consisting of an informative word (not found in the system's dictionary) displayed as heading llne and of an index llne providing some relevant context for the headlng word.Each multiword segment generates as many index records as many informative words it contains.The ri~ht-hand side of the segment following the heading word is placed at the beginning of the index line to serve as its i~nediate context and is followed through a senLicolon by the segment's left-hand side.When both sides are non-empty, an articulation of the index line is so achieved.In the case of a single word segment an "expansion" procedure is performed during index record generation.It starts by placing at the beginning of the index llne a fragment of the title consisting of the filler portion following the heading word and of the next acceptable segment, if any; this initial portion of the index line is followed by a semicolon after which follows the preceding acceptable segment, followed finally by the filler portion separating in the title this preceding segment from the heading word.The index record generation is illustrated in figure 6. final "enrichment" phase of the index record generatlon involves the additional display (in parenthesis) of the unused segments of the processed title. Figure 6 . The transformation of Key-Word-Phrases into subJec~ headlngs and subject entries is illmstrated for the first two seFjnents of the title A, Figure 6 . The last two examples snow how single word segments (from Title C) are expanded to incluoe the preceding and following them segemencs.As a result of this stage the informacioual value of the finally generated index record is almost equivalent to the information content of ~ne initial full title.The entire process ultimately boils down to the the reshuffling of some component segments of the initial title.The enric~ent stage of index record generation is illustrated on figure 7.The index records are alphabetized firstly by heading words and secondly by index lines with the exclusion from alphabetiza~ion of prepositions and conjunctions if they occur aC the beginning of index lines.During the photocomposition different parts of the index line are set usin~ different fonts. If in the original title the initial part of the index llne follows the head word i~nedlately this part is set in bold face italics, i.e. in the same font as the heading.The "inverted" part following the semicolon is set in light face roman letters.Finally the enrichment part of ~he index line, included in patens is always displayed in light-face italics.As a result the The enrichment of the subject entries by the display (in parenthesis) of the unused by them segments of the same title, illustrated for some of the entries of Figure 6 . 
immediately relevan~ coutext of the head word is displayed in bold face in order to facilitate its rapid grasping when browsing.Details of the appearance and s~ructure of KWPSI are exemplified in figure 8 on a sample compiled for titles of publications dealing with librarianship and information science.The general appearance of KWPSI is close enough to the appearance of SI of traditional type.For purposes of transportability the KWPSI system is programmed in ANSA COBOL.It includes two modules:the index generation module and the sorting and reformatting module.On an IBM 370 system index records are generated for titles of scholarly papers at a speed of ~ 70,000 titles/hour.The resulting total size of the index is of the same order as the size of KWOC indexes and compares favorably with the size of the PSI index.The analysis of ~he rates and ~aCure of failures of ~he segment detection algorithm shows that in 96% of cases the generated segments are fully acceptable as valuable index entries. In 2% of cases some important information is lost as a result of the elimination of prepositions, as in case of expressions of 'wood to wood' type. The rest of failures results in somehow awkward segments which are not completely semantically self-contained. Even in such cases the index entries retain some informative value.Around half of the failures can be eliminated by additions to the system's dictionary, especially by the inclusion of more verbs and past participles.Not counted aS failures are the 5% of cases when the leng=h of the detected segments is excessive; such segments can include the whole title.The extent of tuning required for =he application of the system in a new area of knowledge depends mainly upon ~he extent of figure 8. * ......................... (P~R~O~MANCE C(:~I~ARtSOIV) .................... IC.80"2 568 I~CAmULA~Y:FREE " ................. IC, 80, As a matter of fact all kinds of scholarly titles contain such deviations, as for instance portions of normal text included in parentheses or occurrences of mathematical or chemical symbols.We found only one case when the required tuning effort was siguifican~, namely the case of titles from ~he domain of arts and humanities.ISl's "Arts and Humanities Citation
possible usage of automatically detected word phrases for enhanced online access to databases:
The common method of online access to commercially available textual databases of both bibliographic and full cexc type is through boolean queries formulaCsd in tern~s of single words. AuCo~Icically detectable word phrases of the cype used in the KWP$1 sysCem could be used in ~L~ree different rays for improving online access.One extreme way would involve the creation of word phrases of the above cype a= the input stage for every informative word of the Input.In response to a slnsle word query a sequence of screens would be shown displaying the image of whaC in a printed KWPSl would be the KWPSI section under ~he Given word ta~eL~ as headin G . After browsing online some par~ o~ tni~ up-to-dace online 51 the user could choose to limi~ further browsin G by respondln~ wits an additional search term, mos~ likely chosen from some of t~le already examined index entries.As a result ~he system would reply by ellmlna~ing from ~ne displayed output the encrlas aoc concalnin 8 che given word an~ the user would conclnue co browse che so ~rimmed display.Several such i~eratlons could be performe~ until the user would be left with the display of a SI to relevant items of the database. This SI would be then printed together with the full list of relevant items.It is ~hought that such kind of interaction could be more user friendly than the currently used boolean mode.Another way of using the KWPSI technique in an online environment would be to use the KWPSI format for the output of the results of a retrieval performed in a traditional boolean way. The query word which achieved the most strong trimming effect would be used as heading.A third way would involve the compression of a KWFSI section under a given heading before it is displayed in response to a word.One could e.g. retain only such noun phrases containing the given word which occur at least k times in the database. An example of such list for the "Arts and Humanities" database is given in figure 12. By displaying such lists of words closely co-occurring with a given one ~he system would perform thesaurus-type functions.The implementation of all such possibilities would be rather difficult for any existin 8 system in view of the effort required to reprocess past input.Instead, after the input of a query word the corresponding full text records could be called a not processed online in core for generating KWPSl-type index records.In this case all the above functions could be still performed. Another possibility which we are considering is to place the K~PSI-type processing capabilities into a microcomputer which is being used to mediate online searches in remote databases.All the text records containin8 a given (not too frequent) word coulu be initially tapped from the database into the microcomputer.Following that the microcomputer could perform all the above functions in an offline mode,
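The iterative trimming dialogue described above amounts to repeated filtering of the displayed index entries; a minimal sketch follows, with an invented entry format purely for illustration.

```python
# Sketch of the iterative trimming dialogue: start from the index entries
# containing the query word, then repeatedly drop entries lacking each
# further term the user supplies.  The entry strings are invented.
def trim_session(entries, first_word, extra_terms):
    shown = [e for e in entries if first_word.lower() in e.lower()]
    for term in extra_terms:
        shown = [e for e in shown if term.lower() in e.lower()]
    return shown

entries = [
    "retrieval of chemical information; online systems for",
    "retrieval performance comparison; boolean queries and",
    "indexes to periodicals; automatic compilation of",
]
print(trim_session(entries, "retrieval", ["online"]))
# ['retrieval of chemical information; online systems for']
```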
:
The nature of the problem and earlier approaches to the automatic compilation of printed subject indexes are reviewed and illustrated. A simple method is described for the de~ection of semantically self-contained word phrase segments in title-like texts. The method is based on a predetermined list of acceptable types of nominative syntactic patterns which can be recognized using a small domain-independent dictionary.The transformation of the de~ected word phrases into subject index records is described. The records are used for ~he compilation of Key Word Phrase subJec= indexes (K~PSI). The me~hod has been successfully tested for the fully automatic production of KWPSI-type indexes to titles of scientific publications.The usage of KWPSI-type display forma~s for the~enhanced online access to databases is also discussed.Printed subject indexes (SI), such as back-of-the-book indexes and indexes to periodicals and abstracts journals remain important as the most common tools for information retrieval.Traditionally SI are compiled from subject descriptions produced for this purpose by human indexers. Such subject descriptions are usually nominalized sentences in which the word order is chosen to emphasize as theme one of the objects participating in the description; the corresponding word or word phrase is placed at the beginning of the nominative construction.Furthermore, the nominalized sentence is rendered in a specially transformed ('articulated') way involving the separated by commas display of component word phrases together with the dominating prepositions; e.g. the sentence 'In lemon juice lead (is) determined by atomic absorption spectrometry' becomes 'LEMON JUICE, lead determination in, by atomic absorption spectroscopy.' Such rendering enhances the speedy understanding of the descriptions when browsing the index. At the same time it creates for the subject ~escription a llneary ordered sequence of focuses which can be used for the hierarchical multilevel grouping of related sets of descriptions.The main focus (theme) serves for the grouping of descriptions under a corresponding subject heading, the secondary focuses make possible the further subdivision of such group by subheadings. This is illustrated on the SI fragment to "Chemical Abstracts" shown on figure [. Fragment of a subject index of traditional type to "Chemical Abstracts," compiled from subject descriptions by human indexers. A text processing problem, studied in connection with the compilation of such si of traditional type, was the automatic transformation of subject descriptions for selecting their different possible themes and focuses (Armitage, 1967) . An experimental procedure, not yet implemented, takes as input pre-edited subject descriptions (Cohen, 1976) .Since the generauion of subjec= descrip=ions by human indexers is a very expensive procedure P. Luhn (1959) of IBM has suggested replacing subject descriptions by titles provided by the publication's authors. Using only a 'negative' dictionary of high frequency words excluded from indexing, he designed a procedure for the automatic compilation of listings where fragments of titles are displayed repeateuly for all their lndexable words, These words are alphabetized and displayed on the printed page in the central position of a column; their contextual fragments are sorted according to the right-hand side COntextS of the index words. 
Such listings, called Key-Word-ln-Context (ENIC) indexes, have been produced and successfully marketed since 1960 "quick-a~d-dirty" $I, despite their 'mechanical' appearance which makes them difficult enough to read and browse. A fragment of KNIC index to "Biological Abstracts," featuring titles enriched by addition~l key words is shown in figure 2. Another mechanically compiled SI substitute still in current use is based on a similar idea and simply groups together all the titles concaining a same indexable word. Such Key-Word-out-of-Context (~WOC) indexes display the full texts of titles under a common beading. Figure 3 shows a KWOC sample generated from title-Like subject descriptions ac the Institute for Scientific Information, The appearance of KWOC indexes is more steep,abe but their bro~elng is much hindered by the lack of articulation of the Lengthy subject descriptions (titles). Without proper articulation, the recognition of the context immediately relevant to the index word becomes too slow.In 1966 the Institute for Scientific Information (ISI) introduced a different type of automatically compiled subject index called PERMUTERM Subject Index (PSI) OGarfieid, 19760, which at present is the main type of SI to the Science Cltation Index and other similar ISI publications.Two different negative dictionaries are used for producing ~his SI: a so called "full stop list" of words excluded from becoming headings as well as from being used as subordinate index entries, and a "semi-stop List" of ~ords of little informative value, which are noC allowed as headings but are used as index entries along with words ~ound neither in the full-stop uor in ~he semi-stop Lists. ~n the PSI every word Co-occuring wi~h the heading word in some PSI has the unique ability to make possible the easy retrieval of all titles containing any given pair of informative words.This ability is similar to the ability of computerized online search systems to retrieve titles by any boolean combination of search terms.The corresponding PSI ability is available to PSI users who have been instructed about the principles used for compiling it. The naive user is more likely to utilize it as a browsing tool. When doing so, he may be inclined to perceive the subordinate word entries as being the immediate context of the headings.Used as a browsing tool, PSI may deliver relatively high percentage of false drops because of the lack of contextual information. Another shortcoming of the PSI is its relatively high cost due to its significant size which is proportional to the square of the average length of titles. The large number of entries subordinated to headings which are words of relatively high frequency makes the exhaustive scanning of entries under such headings a time consumLing procedure.An important advantage of all the above computer generated indexes over their manually compiled counterparts is the speed and essentially lower cost at which they are made available.All the above compilation procedures are based exclusively on the most trivial facts concerning the syntaxis and semantics of natural languages. 
They make use of the fact that texts are built of words, of the existence of words having purely syntactic functions and of the existence of lexlcal units of very little informative value.A common disadvantage versus the SI of traditional type is that the above procedures fall to provide articulated contexts which would be short enough and structurally simple enough to be easily 8rasped in the course of browsing.Certainly this problem can be solved by any systen which can perform the full syntactic analysis of titles or similar kinds of subject descriptions. From the syntactic tree of the title a brief articulated context can be produced for any given word of a title by detecting a subtree of suitable size which includes the given word.However, in the majority of cases the practical conditions of application of index compilation procedures are excluding the usage of full scale syntactic analysis, based on dictionaries containing the required morphological, syntactic and semantic information for all the lexical units of the processed input.For instance, ISI is processing annually for its mutlidisciplinary publications around 700,000 titles ranging in their subject orientation from science and technology to arts and humanities.The effort needed for the creation and maintenance of dictionaries covering several hundred thousands entries with a high ratio of appearance of new words would be excessive.Therefore, the automatic compilation of SI is practially feasible only on the basis of quite simplistic procedures based on "negative" dictionaries involving approximative methods of analysis which yield good results in ~he majority of cases, but are robust enough not to break down even in difficult cases.At one end of the range of problems involving natural language processsing are such as question answering which require a high degree of analytic sophistication and are based on a significant amount of domain dependent information formated in bulky lexicons.Such procedures appear to be applicable to texts dealing with rather narrow fields of knowledge in the same way as the high levels of iu-depth human expertise are usually limited to specific domains. On the other end of the spectrum are simple problems requiring much less domain dependent information and relatively low levels of "intelligence" (oefined as the ability to discuss comprehensive texts from gibberish); the corresponding procedures are usually applicable to wide categories of texts.For reasons explained above, we consider the problems of automatic compilation of subject indexes as belongin~ to this low end of the spectrum.
Appendix:
| null | null | null | null | {
"paperhash": [
"garfield|the_permuterm_subject_index:_an_autobiographical_review",
"cohen|experimental_algorithmic_generation_of_articulated_index_entries_from_natural_language_phrases_at_chemical_abstracts_service",
"klein|a_computational_approach_to_grammatical_coding_of_english_words"
],
"title": [
"The permuterm subject index: An autobiographical review",
"Experimental Algorithmic Generation of Articulated Index Entries from Natural Language Phrases at Chemical Abstracts Service",
"A Computational Approach to Grammatical Coding of English Words"
],
"abstract": [
"The Permuterm Subject Index (PSI) section of the Science Citation Index (SCI) was designed more than ten years ago and has been published both quarterly and annually since 1966. There is, however, no ‘primordial’ citable paper about the PSI. It has been described and discussed from different standpoints in a number of papers (1,2), but none of them provides the formal description usually accorded a new bibliographic tool. This article is intended to provide such a reference point for future workers in information science. The PSI was designed in 1964 at the Institute for Scientific Information (1S1) by myself and Irving Sher, my principal research collaborator at the time. In the subsequent development of the PSI, contributions were also made by others, including Arthur W. Elias, who was then in charge of production operations at 1S1. In the early sixties we were too preoccupied with the task of convincing the library and information community of the value of citation indexing even to consider the idea of publishing a word index. But it was a logical development once we added the Source Zndex containing full titles. The value of the PSI as a ‘natural language’ index is now well recognized and exploited by its users, but this was not the original reason for its development. The PSI was developed as one solution to a problem commonly faced by uses of the Citation index section of the Science Citation Index (SCZ_). While the typical scientist-user could enter the Citation Index with a known author or paper, other users with a limited knowledge of the subject often lacked a starting point for their search. Before publication of the PSI, we told users whose unfamiliarity with subject matter left them doubtful about a starting point to consult an encyclopedia or the subject index of a book. If these failed, we told them to use another index, such as Chemical Abstracts, Biological Abstracts, Physics Abstracts or Index Medicus. Once the user identified a relevant older paper, it could be used to begin a search in the Citation Index. Users of the SC1—and librarians in particular needed some tool with which a starting point, or what used to be called a target reference, could be quickly and easily identified. In those days the information community was pre-occupied with KeyWord-in-Context (KWIC] indexes. The development of the KWIC index, which was subsequently vigorously marketed by IBM, undoubtedly had an enormous impact (3, 4, 5). But I was never happy with the KWIC system for a number of reasons. First, Sher and I felt that the KWIC index was highly uneconomical for a printed index. KWIC’S use of space is prodigious, and it can be extremely time-consuming to use in searches involving more than one term. 546",
"J. Lederberg, et al., “Applications of Artificial Intelligence for Chemical Inference. I. The Number of Possible Organic Compounds. Acyclic Structures Containing C, H, 0, and N”, J . Am. Chem. Soc., 91,2973-76 (1969). M. Milne, et al., “Search of CA Registry (1.25 Million Compounds) with the Topological Screens System”, J . Chem. Doc., 12, 183-9 (1972). D. Lefkovitz, “The Large Data Base File Structure Dilemma”, J. Chem. InJ Comput. Sci., l!;, 14-9 (1975). M. F. Lynch, et al., “Computer Handling of Chemical Information”, McDonald, London, and American Elsevier, New York, 1971, p 84. D. J. Gluck, “A Chemical Structure, Storate and Search System Development at DuPont”, J . Chem. Doc., 5 , 43-51 (1965). (13) Reference 11, p 91. (14) G. W. Adamson, et al., “Strategic Considerations in the Design of a Screening System for Substructure Searches on Chemical Structure Files”, J . Chem. Doc., 13, 153-7 (1973). (15) C. N. Mooers, “Zatocoding Applied to the Mechanical Organization of Knowledge”, Am. Doc., 2, 20-32 (1951). (16) If we were dealing with binary variables instead of descriptors, as differentiated in the opening paragraphs, then the optimal incidence would be 1 / 2 instead of 1 / e . (17) D. E. Knuth, “The Art of Computer Programming. Vol. I. Fundamental Algorithms”, Addison-Wesley, Reading, Mass., 1968, p 179.",
"As a firs l~ step in many computer language processing systems, each word in a natural language sentence must be coded as to its form-class or part of speech. This paper describes a computational grammar coder which has been completely programmed and is oper~tional on Lhe IBM 7090. It is part of a complete syntactic annlysis system for which it accomplishes word-class coding, using a computational approach rather than the usual method of dictionary lookup. The resulting system is completely contained in less than 1~,000 computer words. It processes running English text on the IBM 7090 at a rate of more than 1250 words per minute. Since the system is not dependent on large dictionaries, it operates on any ordinary English text. In preliminary experiments with scientific text, the system correctly and unambiguously coded over 90 percent of the words in two samples of scientific writing. A fair proportion of the remaining ambiguity can be removed at higher levels of synvactic analysis, but the problem of structural ambiguity in natural languages is seen to be a critical one in the development of practical language processing systems."
],
"authors": [
{
"name": [
"E. Garfield"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Stanley M. Cohen",
"David L. Dayton",
"R. Salvador"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sheldon Klein",
"R. F. Simmons"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null
],
"s2_corpus_id": [
"18310032",
"26351951",
"826222"
],
"intents": [
[],
[],
[]
],
"isInfluential": [
false,
false,
false
]
} | null | 504 | 0.003968 | null | null | null | null | null | null | null | null |
e784d0f2b686379fa6e1d1425dc6b73efd42394b | 8056720 | null | COMPUTER-ASSISTED TRANSLATION SYSTEMS: The Standard Design and A Multi-level Design | The standard design for a computer-assisted translation system consists of data entry of source text, machine translation, and revision of raw machine translation. This paper discusses this standard design and presents an alternative multilevel design consisting of integrated word processing, terminology aids, preprocessing aids and a link to an off-line machine translation system. Advantages of the new design are discussed. I THE STANDARD DESIGN FOR A COMPUTER-ASSISTED TRANSLATION SYSTEM. The standard design for a computer-assisted translation system consists of three phases: (A) data entry of the source text, (B) machine translation of the text, and (C) human revision of the raw machine translation. Most machine translation projects of the past thirty years have used this design without questioning its validity, yet it may not be optimal. This section will discuss this design and some possible objections to it. | {
"name": [
"Melby, Alan K."
],
"affiliation": [
null
]
} | null | null | First Conference on Applied Natural Language Processing | 1983-02-01 | 4 | 15 | null | The standard design for a computer-assisted translation system consists of three phases: (A) data entry of the source text, (B) machine translation of the text, and (C) human revision of the raw machine translation. Most machine translation projects of the past thirty years have used this design without questioning its validity, yet it may not be optimal. This section will discuss this design and some possible objections to it.The data entry phase may be trivial if the source text is available in machine-readable form already or can be optically scanned, or it may involve considerable overhead if the text must be entered on a keyboard and proofread.The actual machine translation is usually of the whole text. That is, the system is generally designed to produce some output for each sentence of the source text. Of course, some sentences will not receive a full analysis and so there will be a considerable variation in the quality of the output from sentence to sentence. Also, there may be several possible translations for a given word within the same gramatical category and subject matter so that the system must choose one of the translations arbitrarily. That choice may of course be appropriate or inappropriate. It is well-known that for these and other reasons, a machine translation of a whole text is usually of rather uneven quality. There is an alternative to translating the whole text --na~nely, "selective translation," a notion which will be discussed further later on.Revision of the raw machine translation by a human translator seems at first to be an attractive way to compensate for whatever errors may occur in the raw machine translation. However, revision is effective only if the raw translation is already nearly acceptable. Brinkmann (Ig8O) concluded that even if only 20% of the text needs revision, it is better to translate from scratch instead of revising.The author worked on a system with this standard design for a whole decade (from 1970 to 1980) . This design can, of course, work very well. The author's major objection to this ~esign is that it must be almost perfect or it is nearly useless. In other words, the system does not become progressively more useful as the output improves from being 50% correct to 60% to 70% to 80% to 90%. Instead, the system is nearly useless as the output improves and passes some threshold of quality. Then, all of a sudden, the system becomes very useful. It would, of course, be preferable to work with a design which allows the system to become progressivelv more useful.Here is a summary of objections to the standard design: WHY COMPUTATIONAL LINGUISTS 00 NOT LIKE IT: Because even if the algorithms start out "clean", they must be kludged to make sure that somethino comes out for every sentence that goes in.Because they feel that they are tools of the system instead of artists using a tool.Because the system has to be worked on for a lonQ time and be almost perfect before it can be determined whether or not any useful result will be obtained.There has been for some time a real alternative to the standard design --namely, translator aids. These translator aids have been principally terminology aids of various kinds and some use of standard word processing. These aids have been found to be clearly useful. 
However, they have not attracted the attention of computational linguists because they do not involve any really interesting or challengina linguistic processing. This is not to say that they are trivial. It is, in fact, quite difficult to perfect a reliable, user-frlendly word processor or a secure, easy to use automated dictionary. But the challenge is more in the area of computer science and engineering than in computational linguistics.Until now, there has not been much real integration of work in machine translation and translator aids. This paper is a proposal for a system design which allows Just such an integration. The proposed system consists of two pieces of hardware: (1) a translator work station (probably a single-user micro-computer) and (2) a "selective" machine translation system (probably running on a mainframe). The translator work station is a three-level system of aids. All three levels look much the same to the translater. At each level, the translator works at a keyboard and video display. The display is divided into two major windows. The bottom window contains the current segment of translated text. It is a work area, and nothing goes in it except what the translator puts there. The upper window contains various aids such as dictionary entries segments of source text, or suggested translation~ To the translator, the difference between the various levels is simply the nature of the aids that appear in the upper window; and the translator in all cases produces the translation a segment at a time in the lower window. Internally, however, the three levels are vastly different.Level 1 is the lowest level of aid to the translator. At this level, there is no need for data ent~ of the source text. The translator can sit down with a source text on paper and begin translating immediately. The system at this level includes word processing of the target text, access to a terminology file, and access to an expansion code file to speed up use of connmnly encountered terms. Level 2 is an intermediate level at which the source text must be available in machine readable form. It can be entered remotely and supplied to the translator (e.g. on a diskette) or it can be entered at the translator work station. Level 2 provides all the aids available at level l and two additional aids -° (a) preprocessing of the source text to search for unusual or misspelled terms, etc., and (b) dynamic processing of the source text as it is translated. The translator sees in the upper window the current segment of text to be translated and suggested translations of selected words and phrases found by automatically identifying the words of the current segment of source text and looking them up in the bilingual dictionary that can be accessed manually in level I.Level 3 requires a separate machine translation system and an interface to it. Instead of supplying just the source text to the translator work station, the work station receives (on diskette or through a network) the source text and (for each segment of source text) either a machine 17.5 translation of the segment or an indication of the reason for failure of the machine translation system on that segment. This explains the notion of "selective" machine translation referred to previously. A selective machine translation system does not attempt to translate even segment of text. It contains a formal model of language which may or may not accept a given segment of source text. 
If a given segment fails in analysis, transfer, or generation, a reason is given. If no failure occurs, a machine translation of that segment is produced and a problem record is attached to the segment indicating difflculties encountered, such as arbitrary choices made. Level 3 provides to the translator all the aids of levels l & Z. In addition, the translator has the option of specifying a maximum acceptable problem level. When a segment of source text is displayed, if the machine translation of that segment has a problem level which is low enough, the machine translation of that segment will be displayed below the source text instead of the level Z suggestions. The translator can examine the machine translation of a given segment and, if it is Judged to be good enough by the translator, the translator can pull it down into the bottom window with a single keystroke and revise it as needed. Note that writing a selective machine translation system need not mean starting from scratch. It should be possible to take any exist-Ing machine translation system and modify it to be a selective translation system. Note that the translator work station can provide valuable feedback to the machine translation development team by recording which segments of machine translation ~re seen by the translator and whether they were used and if so how revised.The standard design for a machine translation system and the alternative mul ti-level design just described use essentially the same components. They both involve data entry of the source text (although the data entry is needed only at levels 2 and 3 in the multi-level design). They both involve machine translation (although the machine translation is needed only at level 3 in the multilevel design). And they both involve interaction with a human translator. In the standard design, this interaction consists of human revision of the raw machine translation. In the multi-level design, this interaction consists of human translation in which the human uses word processing, terminology lookup, and suggested translations from the computer. At one extreme (level l), the multi-level system involves no machine translation at all, and the system is little more than an integrated word processor and terminology file. At the other extreme (level 3), the multi-level system could act much the same as the standard design. If eve.e.ve~.~.sentence of the source text received a machine translation with a hiqh quality estimate, then the translation could conceivably be produced by the translator choosing to pull each segment of translated text into the translation work area and revise it as needed. The difference between the two designs becomes apparent only when the raw machine translation is not almost perfect. In that case, which is of course common, the multi-level system continues to produce translations with the human translator translating more segments using level l and level 2 aids instead of level ~ aids; the translation process continues with some loss of speed but no major difficulty. When the same raw machine translation is placed in a standard design context, the translator is expected to revise it in spite of the problems, and according to the author's experience, the translators tend to become frustrated and unhappy with their work. Both designs use the same components but put them together differently. 
See Figure I .Here is a summary of the arguments for a multi-level design: WHY COMPUTATIONAL LINGUISTS LIKE IT: Because they can set up a "clean" formal model and keep it clean, because there is no pressure to produce a translation for every sentence that goes in.Because the system is truly a tool for the translator. The translator is never pressured to revise the machine output. Of course, if the raw machine translation of a sentence is very good and needs only a minor change or two, the translator will naturally pull it down and revise it because that is so much faster and easier than translating from scratch.Because the system is useful after a modest investment in level I. Then level 2 is added and the system becomes more useful. While the system is being used at levels l and 2, level 3 is developed and the machine translation system becomes a useful component of the multilevel system when only a small fraction of the source sentences receive a good machine translation. Thus, there is a measurable result obtained from each increment of investment. The multi-level design grew out of a Naval Research Laboratory workshop the summer of IgBl, a paper on translator aids by Martin Kay (Ig80)~ and user reaction to a translator aid system (called a "Suggestion Box" aid) was tested on a seminar of translators fall 1981. The current implementation is on a Z-80 based micro-computer. The next implementation will be on a 16-bit micro-cnmputer with foreign language display capabllities.The author is now looking for a research machine translation system to use in level 3, e.g. ARI~E-78 (See Boitet 1982) . Further papers will discuss the successes and disappointments of a multi-level translation system. | null | null | null | null | Main paper:
:
The standard design for a computer-assisted translation system consists of three phases: (A) data entry of the source text, (B) machine translation of the text, and (C) human revision of the raw machine translation. Most machine translation projects of the past thirty years have used this design without questioning its validity, yet it may not be optimal. This section will discuss this design and some possible objections to it.The data entry phase may be trivial if the source text is available in machine-readable form already or can be optically scanned, or it may involve considerable overhead if the text must be entered on a keyboard and proofread.The actual machine translation is usually of the whole text. That is, the system is generally designed to produce some output for each sentence of the source text. Of course, some sentences will not receive a full analysis and so there will be a considerable variation in the quality of the output from sentence to sentence. Also, there may be several possible translations for a given word within the same gramatical category and subject matter so that the system must choose one of the translations arbitrarily. That choice may of course be appropriate or inappropriate. It is well-known that for these and other reasons, a machine translation of a whole text is usually of rather uneven quality. There is an alternative to translating the whole text --na~nely, "selective translation," a notion which will be discussed further later on.Revision of the raw machine translation by a human translator seems at first to be an attractive way to compensate for whatever errors may occur in the raw machine translation. However, revision is effective only if the raw translation is already nearly acceptable. Brinkmann (Ig8O) concluded that even if only 20% of the text needs revision, it is better to translate from scratch instead of revising.The author worked on a system with this standard design for a whole decade (from 1970 to 1980) . This design can, of course, work very well. The author's major objection to this ~esign is that it must be almost perfect or it is nearly useless. In other words, the system does not become progressively more useful as the output improves from being 50% correct to 60% to 70% to 80% to 90%. Instead, the system is nearly useless as the output improves and passes some threshold of quality. Then, all of a sudden, the system becomes very useful. It would, of course, be preferable to work with a design which allows the system to become progressivelv more useful.Here is a summary of objections to the standard design: WHY COMPUTATIONAL LINGUISTS 00 NOT LIKE IT: Because even if the algorithms start out "clean", they must be kludged to make sure that somethino comes out for every sentence that goes in.Because they feel that they are tools of the system instead of artists using a tool.Because the system has to be worked on for a lonQ time and be almost perfect before it can be determined whether or not any useful result will be obtained.There has been for some time a real alternative to the standard design --namely, translator aids. These translator aids have been principally terminology aids of various kinds and some use of standard word processing. These aids have been found to be clearly useful. However, they have not attracted the attention of computational linguists because they do not involve any really interesting or challengina linguistic processing. This is not to say that they are trivial. 
It is, in fact, quite difficult to perfect a reliable, user-frlendly word processor or a secure, easy to use automated dictionary. But the challenge is more in the area of computer science and engineering than in computational linguistics.Until now, there has not been much real integration of work in machine translation and translator aids. This paper is a proposal for a system design which allows Just such an integration. The proposed system consists of two pieces of hardware: (1) a translator work station (probably a single-user micro-computer) and (2) a "selective" machine translation system (probably running on a mainframe). The translator work station is a three-level system of aids. All three levels look much the same to the translater. At each level, the translator works at a keyboard and video display. The display is divided into two major windows. The bottom window contains the current segment of translated text. It is a work area, and nothing goes in it except what the translator puts there. The upper window contains various aids such as dictionary entries segments of source text, or suggested translation~ To the translator, the difference between the various levels is simply the nature of the aids that appear in the upper window; and the translator in all cases produces the translation a segment at a time in the lower window. Internally, however, the three levels are vastly different.Level 1 is the lowest level of aid to the translator. At this level, there is no need for data ent~ of the source text. The translator can sit down with a source text on paper and begin translating immediately. The system at this level includes word processing of the target text, access to a terminology file, and access to an expansion code file to speed up use of connmnly encountered terms. Level 2 is an intermediate level at which the source text must be available in machine readable form. It can be entered remotely and supplied to the translator (e.g. on a diskette) or it can be entered at the translator work station. Level 2 provides all the aids available at level l and two additional aids -° (a) preprocessing of the source text to search for unusual or misspelled terms, etc., and (b) dynamic processing of the source text as it is translated. The translator sees in the upper window the current segment of text to be translated and suggested translations of selected words and phrases found by automatically identifying the words of the current segment of source text and looking them up in the bilingual dictionary that can be accessed manually in level I.Level 3 requires a separate machine translation system and an interface to it. Instead of supplying just the source text to the translator work station, the work station receives (on diskette or through a network) the source text and (for each segment of source text) either a machine 17.5 translation of the segment or an indication of the reason for failure of the machine translation system on that segment. This explains the notion of "selective" machine translation referred to previously. A selective machine translation system does not attempt to translate even segment of text. It contains a formal model of language which may or may not accept a given segment of source text. If a given segment fails in analysis, transfer, or generation, a reason is given. If no failure occurs, a machine translation of that segment is produced and a problem record is attached to the segment indicating difflculties encountered, such as arbitrary choices made. 
Level 3 provides to the translator all the aids of levels l & Z. In addition, the translator has the option of specifying a maximum acceptable problem level. When a segment of source text is displayed, if the machine translation of that segment has a problem level which is low enough, the machine translation of that segment will be displayed below the source text instead of the level Z suggestions. The translator can examine the machine translation of a given segment and, if it is Judged to be good enough by the translator, the translator can pull it down into the bottom window with a single keystroke and revise it as needed. Note that writing a selective machine translation system need not mean starting from scratch. It should be possible to take any exist-Ing machine translation system and modify it to be a selective translation system. Note that the translator work station can provide valuable feedback to the machine translation development team by recording which segments of machine translation ~re seen by the translator and whether they were used and if so how revised.The standard design for a machine translation system and the alternative mul ti-level design just described use essentially the same components. They both involve data entry of the source text (although the data entry is needed only at levels 2 and 3 in the multi-level design). They both involve machine translation (although the machine translation is needed only at level 3 in the multilevel design). And they both involve interaction with a human translator. In the standard design, this interaction consists of human revision of the raw machine translation. In the multi-level design, this interaction consists of human translation in which the human uses word processing, terminology lookup, and suggested translations from the computer. At one extreme (level l), the multi-level system involves no machine translation at all, and the system is little more than an integrated word processor and terminology file. At the other extreme (level 3), the multi-level system could act much the same as the standard design. If eve.e.ve~.~.sentence of the source text received a machine translation with a hiqh quality estimate, then the translation could conceivably be produced by the translator choosing to pull each segment of translated text into the translation work area and revise it as needed. The difference between the two designs becomes apparent only when the raw machine translation is not almost perfect. In that case, which is of course common, the multi-level system continues to produce translations with the human translator translating more segments using level l and level 2 aids instead of level ~ aids; the translation process continues with some loss of speed but no major difficulty. When the same raw machine translation is placed in a standard design context, the translator is expected to revise it in spite of the problems, and according to the author's experience, the translators tend to become frustrated and unhappy with their work. Both designs use the same components but put them together differently. See Figure I .Here is a summary of the arguments for a multi-level design: WHY COMPUTATIONAL LINGUISTS LIKE IT: Because they can set up a "clean" formal model and keep it clean, because there is no pressure to produce a translation for every sentence that goes in.Because the system is truly a tool for the translator. The translator is never pressured to revise the machine output. 
Of course, if the raw machine translation of a sentence is very good and needs only a minor change or two, the translator will naturally pull it down and revise it because that is so much faster and easier than translating from scratch.Because the system is useful after a modest investment in level I. Then level 2 is added and the system becomes more useful. While the system is being used at levels l and 2, level 3 is developed and the machine translation system becomes a useful component of the multilevel system when only a small fraction of the source sentences receive a good machine translation. Thus, there is a measurable result obtained from each increment of investment. The multi-level design grew out of a Naval Research Laboratory workshop the summer of IgBl, a paper on translator aids by Martin Kay (Ig80)~ and user reaction to a translator aid system (called a "Suggestion Box" aid) was tested on a seminar of translators fall 1981. The current implementation is on a Z-80 based micro-computer. The next implementation will be on a 16-bit micro-cnmputer with foreign language display capabllities.The author is now looking for a research machine translation system to use in level 3, e.g. ARI~E-78 (See Boitet 1982) . Further papers will discuss the successes and disappointments of a multi-level translation system.
Appendix:
| null | null | null | null | {
"paperhash": [
"boitet|implementation_and_conversational_environment_of_ariane_78.4,_an_integrated_system_for_automated_translation_and_human_revision",
"melby|multi-level_translation_aids_in_a_distributed_system",
"brinkmann|terminology_data_banks_as_a_basis_for_high-quality_translation"
],
"title": [
"Implementation and Conversational Environment of ARIANE 78.4, An Integrated System for Automated Translation and Human Revision",
"Multi-Level Translation Aids in a Distributed System",
"Terminology Data Banks as a Basis for High-Quality Translation"
],
"abstract": [
"ARIANE-78.4 is a computer system designed to o f fe r an adequate environment for construct ing machine t rans la t ion programs, for running them, and for (humanly) rev is ing the rough t rans la t ions produced by the computer. ARIANE-78 has been operat iona l at GETA for more than 4 years now. This paper refers to version 4. I t has been used for a number of appl icat ions (russian and japanese, engl ish to french and malay, portuguese to engl ish) and has constant ly been amended to meet the needs of the users. Parts of th is system have been presented before [ 2 ,3 ,7 ,8 ] , but i t s whole has only been described in in ternal technical documents.",
"At COLING80, we reported on an Interactive Translation System called ITS. We will discuss three problems in the design of the first version of ITS: (1) human factors, (2) the \"all or nothing\" syndrome, and (3) traditional centralized processing. We will also discuss a new version of ITS, which is now being programmed. This new version will hopefully overcome these problems by placing the translator in control, providing multiple levels of aid, and distributing the processing.",
"Currently existing terminology data banks serve various purposes. Two major groups, i.e. standardization-oriented and translation-oriented terminology data banks are of special significance. This paper deals exclusively with translation-oriented banks and uses as an example the TEAM terminology data bank system developed by the Language Services Department of SIEMENS."
],
"authors": [
{
"name": [
"C. Boitet",
"P. Guillaume",
"M. Quezel-Ambrunaz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Melby"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Karl-Heinz Brinkmann"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null
],
"s2_corpus_id": [
"7252947",
"8310536",
"37327977"
],
"intents": [
[],
[],
[]
],
"isInfluential": [
false,
false,
false
]
} | Problem: The standard design for a computer-assisted translation system, involving data entry of source text, machine translation, and human revision, may not be optimal due to issues with quality and efficiency.
Solution: The paper proposes an alternative multilevel design for a computer-assisted translation system, integrating word processing, terminology aids, preprocessing aids, and a link to an off-line machine translation system, to address the limitations of the standard design and improve translation quality and efficiency. | 504 | 0.029762 | null | null | null | null | null | null | null | null |
414f16a8d73d01bebf5c8262016c919d78d0307e | 6596460 | null | A Robust Portable Natural Language Data Base Interface | A BSTRA CT This paper describes a NL data base interface which consists oF two parts: a Natural Language Processor (NLP) and a data base application program (DBAP). The NLP is a general pur!~se language processor which builds a formal representation of the meaning of the English utterances it is given. The DBAP is an algorithm with builds a query in a augmented relational algebra from the output of the NLP. This approach yields an interface which is both extremely robust and portable. | {
"name": [
"Ginsparg, Jerrold M."
],
"affiliation": [
null
]
} | null | null | First Conference on Applied Natural Language Processing | 1983-02-01 | 12 | 28 | null | This paper describes an extremely robust and portable NL data base interface which consists of two parts: a Natural Language Processor (NLP) and a data base application program (DBAP). The NLP is a general purpose language processor which builds a formal representation of the meaning of the English utterances it is given. The DBAP is an algorithm with builds a query in an augmented relational algebra from the output of the NLP.The system is portable, or data base independent, because all that is needed to set up a new data base interface are definitions for concepts the NLP doesn't have, plus what I will call the +data base connection", i.e,, the connection between the relations in the data base and the NLP's concepts. Demonstrating the portability and the robustness gained from using a general purpose NLP are the main subjects of this paper Discussion of the NLP will be limited to its interaction with the DBAP and the data base connection, which by design, is minimal. [Ginsparg 5 ] contains a description of the NLP parsing algorithm. | The formal language the NLP uses to represent meaning is a variant of semantic nets [Quillian 8] . For example, the utterances "The color of the house is green," "The house's color is green." "Green ,~ the color that the house is." would all be ~ransformed to: where "colored", "'color" and "'house" are system primitives called concepts. Each concept is an extended case frame, [Fillmore 2] . The meaning of each concept to the system is implicit in its relation to the other system concepts and the way the system manipulates it.Each concept has case preferences associated with =IS cases. For example, the case preference of color is "color and the case preference of coloredis "physical-object.The case preferences induce a network among the concepts. For example, "color is connected to "physical-object via the path: ['physical-object colored'colored color "color]. In addition. "color is connected to "writing,implement, a refinement ot" "physicalobject, by a path whose meaning is that the writing implement writes with that color. This network is used by the NLP to determine the meaning of many modifications, For example, "red pencil" is either a pencil which is red or a pencil that writes red, depending on which path is chosen. In the absence of contextual information, the NLP chooses the shortest path.In normal usage, case preferences are often broken. The meaning of the broken preference involves coercing the offending concept to another one via a path in the network. Examples are:"Turn on the soup." "Turn on the burner that has soup on it." "My car will drink beer." "The passengers in my car will drink beer"The system can understand anything it has a concept about. regardless of whether the concept is attached to a relation in the data base scheme. In the Suppliers data base from Secuon 4., parts had costs and weights associated with them, but not sizes. If a user asks "How big are each of the parts?" and the interface has a "size primitive (which it does), the query building process wdl attempt to find the relation which "size maps to and on fading wdl report back to the user. "There is no information in the data base about the size of the parts." This gives the user some informatmn about the what the data base contains, An answer like "1 don't know what "big" means." 
would leave the user wondering whether size information was in the data base and obtainable if only the "right" word was used.The system can interpret user statements that are not queries. If the user says "A big supplier is a supplier that supplies more than 3 projects" the NLP can use, the definition qn answering later queries. The definition is not made on a "string" basis e.g., substttuting the words of one side of the definition for the other Instead. whenever the query building algorithm encounters an mstantiated concept that is a supplier wnh the condition "size~x. big) it builds a query substnuting the condiuon from the definition that it can expand as a data base query Thus the .~vstern can handle "big london suppliers" and answer "Which sunpliers are big" which it couldn't if ~t were doing strlct string substitution.This Facility can be used to bootstrap common definitions In ,~ commercial flights application, with data base scheme, Flights(fl#,carrier,from.to,departure,arrival.stops.cost ) the word "nonstop" is defined to the system in English as, "A nonstop flight is a night that does not make any stops " and then saved along wuh the rest of the system's defimt~ons.Coercions (section 2.) can be used solve problems that may require inferences in other systems. [Grosz 6 ] discusses the query "Is there a doctor within 200 miles of Philadelphia" in the context of a scheme in which doctors are on ships and ships have distances from cities, and asserts that a system which handles this query must be able to inter that if a doctor is on a ship, and the ship is with 200 miles of Philadelphia, then the doctor is within 200 miles of Philadelphia. Using coercions, the query would be understood as "is there a ship with a doctor on it that is within 200 miles of Philadelphia?', which solves the problem immediately.Since the preference information is only used to choose among competing interpretations, broken preferences can still be understood and responded to. The preference for the supplier case is specified to •supplier but if the user says "How many parts does the sorter project supply?" the NLP will find the only interpretation and respond "projects do not supply parts, suppliers do."Ambiguities inherent in attribute values are handled using the same methods which handles words with multiple definitions. For example, 1980 may be an organization number, a telephone extension, a number, or a year.The NLP has a rudimentary (so far) expert system inference mechanism which can easily be used by the DBAP. One of the rules it uses is "If x is a precondition of y and z knows y is true then z knows x was and may still be true" One of the ['acts in the NLP knowledge base is that being married is a precondition of being divorced or widowed. If a user asks "Did Fred Smith used to be married?" in a data base with the relation Employees(name, marital-status) the system can answer correctly by using its inference mechanism. The exact method is as follows. The data base application receives the true-false question:"Fred Smith was married and Fred Smith is no longer married"Since the data base includes only current marital status information. the only way to answer the first part of the question is to inl'cr it from some other information in the data base. 
The data base application sends the query to the NLP inference mechanism which would ordinarily attempt to answer it by matching it against its knowledge base or by finding a theorem which would gives it something else to match ['or When called by the data base application, the inference mechanism simply uses its rules base to decide what it should match ['or, and then returns to the data base program. In this, example, the inference mechanism receives "Fred Smith was married" and using the precondition rule mentioned above, returns to the data base program, "Is Fred Smith divorced" or "is Fred Smith widowed", which can be answered by the data base. The DBAP can call the inference mechanism recursively if necessary. | null | Pseudo Cities jcity,scityThis creates a pseudo relation, Cities(cname), so that the query building algorithm can treat all attributes as if they belong to a relation. The query produced by the system will refer to the Cities relation. A postprocessor is used to remove references to pseudo relations from the final query. Pseudo relations are important because they ensure uniform handling of attributes. With the pseudo Cities relation, questions like "Who supplies every city? = and "List the cities." can be treated identically to "Who supplies every project'?" and "List the suppliers."The remainder of the data base connection is a set of switches which provide information on how to print out the relations. whether all proper nouns have been defined or are to be inferred. whether relations are multivalued, etc. The switch settings and the four components above constitute the entire data base connection, Nothing else iS needed.The network of concepts in the NLP should only be augmented for a particular data base; never changed. Yet different data base schemes will require different representations for the same word. For example, depending on the data base scheme, it could be correct to represent "box" as either, gl [sa: "part Conditions: "named(gl,box) g2 Isa: "container Conditions: "named(g2.box)g3 [sa: "boxThe solution is to define each word to map to the lowest possible concept. When a concept is encountered that has a data base relation associated with )t. there is no problem. If there )s no relauon associated with a concept, the NLp searchs For a concept that does correspond to a relation and is also a generalization ot" the concept in question. If one is found, it is used with an appropriate condilion, usually "tilled or "named. So "box" has a definition which maps to "box. In the data base connection given above. "box" would be instantiated as a "=part" since "'box" is a refinement of "'part" and no relation maps to "box,"The information in the data base connection ts primarily used m building the query (section .~). But It IS ~llso used Io augment the knowledge base of Ihe NLPThe data base connection is used to overwrite the NLP's ca~e preferences. Since Iocawd-> Supphers ()r Projects. the preference ot" localed ts spec)fied to "suppliers or "protects. This enables the NLP to interpret the first noun group )n "Do ,m', suppliers that supply widgets located nl london also supply ,~cre',vs )" as "'suppliers in London that supply widgets" rather than "supphers that ,;upph London wldgets" This )s in contrast to [Gawron 31 which u'..;es ,i separate "disambiguator" phase to ehmlnale parses that do 11()i make sense =n the conceptual scheme of the dala base.Tile additional preference informamm supplied bv the data base connection is used to induce coercions (section 2.) 
thai would rlot be made in the absence of the connection (~r under ,mother data base scheme. "Who supplies London" does not break any real world preferences, but does break one of the preferences induced by this data base scheme, namely that Suppliee is a "project. London. a "city, is coerced to "project via the path [*project located *located /ocanon °cityl and the question is understood to mean "Who supplies projects which are in London."As mentioned in Section 2., the NLP determines the meanin~ of many modifications by searching for connections in a semantic net. The data base connection is used to augment and highlight the existing network of the NLP. If the user says, "What colors do parts come in?', the NLP can infer that the meaning of "come-in" intended by the user is "colored since the only path through the net between "color and "part derived from the case preferences induced by the data base connection is ['part colored "colored color "color]Similarly, when given the noun group "London suppliers" the meaning is determined by tracing the shortest path through the highlighted net,['supplier located'located Iocanon "city]The longer path connecting "supplier and "city,['supplier supplier "supply suppliee *project located "location location *city]which means "the suppliers that supply to london projects" is found when the NLP rejects the first meaning because of context, If the user says "What are the locations of the London suppliers" the system assumes the second meaning since the first (in the domain of this data base scheme) leads to a tautological reading. The NLP is able to infer that "The locations of the suppliers located in London" is tautological while "The locations of the suppliers located in England" is not, because the data base connection has specified "located to be a single valued concept with its Iocarton case typed to "city. If the system were asked for the locations of suppliers in England, and it knew England was a country, the question would be interpreted as "the cities of the suppliers that are located in cities located in England."A trtee of the query building algorithm.The query budding algorithm is illustrated by tracmg its operation on the question, "Does blake supply any prolects in london'?"The NLP's meaning representation I'or this question ts shown below. The NLP treats most true-l'aise questions with indefinites as requests for the data which would make the statement true. The question's meaning is "to show the subset of london proiects that are supplied by Blake."The query building algorithm builds up the query recursively Given an instantiated concept with cases, =t expands the contents of each case and links the results together with the relation corresponding to the concept. Given an instantiated concept with conditions, it expands each condition. For the example, we have. where the: extra loin resulting f'rom the pseudo (:h=e~ relation ha', been rernoved by the post processor (section 3 )Entirely as a side effe,'t of the way the query rs generated, the -,,,,,tern can easily correct any l'alse assumptions made by the u~,,2r [Kaplan 71 . For example, if there were no projects in London. gill would be empty and system would respond, generating Irom the instantiated concept glO li.e., the names used in query correspond to the names used in the knowledge representatmnL "There arc no suppliers located in London." 
No additional "violated presupposition" mechanism is required. The remainder of this section discusses several aspects of the query building process that the trace does not show. Negations are handled by introducing a set difference when necessary. If the example query were "Does Blake supply any projects that aren't in London?", the expansion of g7 would have included a set difference. In general, "or" becomes a union and "and" becomes an intersection. However, if an "and" conjunction is in a single valued case (information obtained from the data base connection), a union is used instead. Thus "Who supplies london and paris?" is interpreted as "Who supplies both London and Paris?" and "Who is in London and Paris?" is interpreted as "Who is in London and who is in Paris?" in the example data base scheme. Quantifiers are handled by a post processing phase. "Does blake supply every project in London?" is handled identically to "Does Blake supply a project in London?" except that the expansion of "projects in London" is marked so that the post processor will be called. The post processor adds on a set of commands which check that the set difference of London projects and London projects that Blake supplies is empty. The resulting query is:

g1 = select from Suppliers where sname = blake
g2 = select from Projects where jcity = london
g3 = join Spj to g1
g4 = join g3 to g2
g5 = project jno from g2
g6 = project jno from g4
g7 = difference of g5 and g6
g8 = empty g7

The first four commands are the query for "Does Blake supply a project in London?". The last four check that no project in London is not supplied by Blake. A minor modification is needed to cover cases in which the query building algorithm is expanding an instantiated concept that references an instantiated concept that is being expanded in a higher recursive call. The following examples illustrate this. Consider the data base scheme below, taken from [Ullman 9]. In the first query "beer" was the only attribute projected from g1; in the second, the system projected both "beer" and "drinker", because in expanding "a beer he likes" it needed to expand an instantiated concept (the one representing "who") that was already being expanded. All of these cases interact gracefully with one another. For example, there is no problem in handling "Who supplies every project that is not supplied by blake and bowles". The DBAP is fully implemented and debugged. The NLP is implemented and still growing. Both are implemented in Franz Lisp, a dialect of LISP. Language processing and query generation are performed in virtually real time (average 1-3 cpu seconds) on a Vax 11-780. The system is intended to be used with a Data Base Management System. The interface between the DBAP and the DBMS is a straightforward translator from relational algebra to the query language of the DBMS. I have written a translator for Polaris [Gielan 4]. The system handles all the examples in this paper as well as a wide range of others (Appendix A). Several different data base schemes have been connected to the system for demonstrations, including one "real data base" abstracted from the on-line listing of the Bell Laboratories Company Directory.
| Consider the data base given by the following scheme:SuppIiers(sno,sname,scity) Projects(jno,jname,jcity ) Parts(pno,pname,color.cosl,weight) Spj ( sno,pno,jno.quantity ,m, y ) Suppliers and proiects have a number, )~ame and c~tV Parts ha'.,: a number, name, color, cost and weight Supplier wl(~ ,,unphe,, a quanntYof parts pno to prolect /no in month ,nor yearThe data base connection has four parts: I. Connecting each relation to the appropriate concept: Suppliers -> "supplier Pro)ects -:> "project Parts-> "part Spj-> "supply 2. Connecting each attribute to the appropriate concept: sno,pnojno -> "indexing-number sname,pname,jname-> "name jclty,scity -> "city m-> "month y-> "year COSt-> "COSt weight-> "weight quantity -> "quantity 3. Capturing the information implicit in each relation:Parts (pno,pname,color,cost,weight ) "indexnumberp indexnumber-> pno numbered-> Parts "named name-> pname named-> Parts "colored color -> color colored-> PartsCOSt -> cost costobj -> Parts "weighs weight-> weight weightobj-> Parts Projects(jno.jnamedcity) "indexnumberp indexnumber -> jno numbered-> Projects "named name -> jname named-> Projects "located location -> jetty located -> Prolects Suppliers(sno,sname,scity) "indexnumberp indexnumber -> sno numbered-> Suppliers "named name -> sname named-> Suppliers "located location -> sctty located -> Suppliers %pl O~no.pno.lno.quant Hv.m.y ) "supply supplier -> '.;no supplied -> pno suppliee -> mo (cardinality-of pno) -> quantity ume-> m.y "spend spender -> 1no spendfor -> pno amount (" cost quantity)The amoum case of "spend maps to a computation rather than a ,~mgle attribute It' all the attributes in the computahon are not present ,n the relation being defined, the query building program ioms ,n the necessary extra relations. So the definition of "spend ~mrks equally well irl tile example scheme as well as in a scheme leg., Spj(sno,pno,jno,cost,quantity)) in which the cost ol a part depended on the supplier | Main paper:
introduction:
This paper describes an extremely robust and portable NL data base interface which consists of two parts: a Natural Language Processor (NLP) and a data base application program (DBAP). The NLP is a general purpose language processor which builds a formal representation of the meaning of the English utterances it is given. The DBAP is an algorithm which builds a query in an augmented relational algebra from the output of the NLP. The system is portable, or data base independent, because all that is needed to set up a new data base interface are definitions for concepts the NLP doesn't have, plus what I will call the "data base connection", i.e., the connection between the relations in the data base and the NLP's concepts. Demonstrating the portability and the robustness gained from using a general purpose NLP are the main subjects of this paper. Discussion of the NLP will be limited to its interaction with the DBAP and the data base connection, which, by design, is minimal. [Ginsparg 5] contains a description of the NLP parsing algorithm.
nlp overview:
The formal language the NLP uses to represent meaning is a variant of semantic nets [Quillian 8]. For example, the utterances "The color of the house is green," "The house's color is green," and "Green is the color that the house is." would all be transformed to the same representation, where *colored, *color and *house are system primitives called concepts. Each concept is an extended case frame [Fillmore 2]. The meaning of each concept to the system is implicit in its relation to the other system concepts and the way the system manipulates it. Each concept has case preferences associated with its cases. For example, the case preference of color is *color and the case preference of colored is *physical-object. The case preferences induce a network among the concepts. For example, *color is connected to *physical-object via the path [*physical-object colored *colored color *color]. In addition, *color is connected to *writing-implement, a refinement of *physical-object, by a path whose meaning is that the writing implement writes with that color. This network is used by the NLP to determine the meaning of many modifications. For example, "red pencil" is either a pencil which is red or a pencil that writes red, depending on which path is chosen. In the absence of contextual information, the NLP chooses the shortest path. In normal usage, case preferences are often broken. The meaning of the broken preference involves coercing the offending concept to another one via a path in the network. Examples are "Turn on the soup.", meaning "Turn on the burner that has soup on it.", and "My car will drink beer.", meaning "The passengers in my car will drink beer."
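The path-based disambiguation described here can be made concrete with a small sketch. The following Python fragment is illustrative only: the concept names and case labels (*pencil, *writes-with, colored-obj, ...) are invented for the example, and the paper does not publish its actual concept inventory or code.

```python
# Toy version of the concept network and shortest-path disambiguation.
from collections import deque

# Each edge (concept -> concept) is labelled with the case that links them,
# mirroring paths such as [*physical-object colored *colored color *color].
EDGES = {
    "*pencil": [("colored-obj", "*colored"), ("writes", "*writes-with")],
    "*colored": [("color", "*color")],
    "*writes-with": [("ink", "*ink"), ("writer", "*pencil")],
    "*ink": [("color", "*color")],
    "*color": [],
}

def shortest_path(start, goal):
    """Breadth-first search over the concept net; the shortest path is the
    preferred reading in the absence of context."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for case, nxt in EDGES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [case, nxt]))
    return None

# "red pencil": connect *pencil to *color.  The shorter path means
# "a pencil which is red"; the longer one, via *writes-with, would mean
# "a pencil that writes red".
print(shortest_path("*pencil", "*color"))
# ['*pencil', 'colored-obj', '*colored', 'color', '*color']
```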
the data base connection:
Consider the data base given by the following scheme:

Suppliers(sno,sname,scity)
Projects(jno,jname,jcity)
Parts(pno,pname,color,cost,weight)
Spj(sno,pno,jno,quantity,m,y)

Suppliers and projects have a number, name and city. Parts have a number, name, color, cost and weight. Supplier sno supplies a quantity of parts pno to project jno in month m of year y. The data base connection has four parts:

1. Connecting each relation to the appropriate concept: Suppliers -> *supplier, Projects -> *project, Parts -> *part, Spj -> *supply.

2. Connecting each attribute to the appropriate concept: sno, pno, jno -> *indexing-number; sname, pname, jname -> *name; jcity, scity -> *city; m -> *month; y -> *year; cost -> *cost; weight -> *weight; quantity -> *quantity.

3. Capturing the information implicit in each relation:

Parts(pno,pname,color,cost,weight): *indexnumberp indexnumber -> pno, numbered -> Parts; *named name -> pname, named -> Parts; *colored color -> color, colored -> Parts; *costs cost -> cost, costobj -> Parts; *weighs weight -> weight, weightobj -> Parts.

Projects(jno,jname,jcity): *indexnumberp indexnumber -> jno, numbered -> Projects; *named name -> jname, named -> Projects; *located location -> jcity, located -> Projects.

Suppliers(sno,sname,scity): *indexnumberp indexnumber -> sno, numbered -> Suppliers; *named name -> sname, named -> Suppliers; *located location -> scity, located -> Suppliers.

Spj(sno,pno,jno,quantity,m,y): *supply supplier -> sno, supplied -> pno, suppliee -> jno, (cardinality-of pno) -> quantity, time -> m,y; *spend spender -> jno, spendfor -> pno, amount -> (* cost quantity).

The amount case of *spend maps to a computation rather than a single attribute. If all the attributes in the computation are not present in the relation being defined, the query building program joins in the necessary extra relations. So the definition of *spend works equally well in the example scheme as in a scheme (e.g., Spj(sno,pno,jno,cost,quantity)) in which the cost of a part depended on the supplier.
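One way to picture the data base connection is simply as a table of mappings. The sketch below writes an abridged version of it down as Python data; the dictionary layout and the helper are assumptions for illustration — only the relation, attribute and concept names come from the scheme above.

```python
# Minimal sketch of the "data base connection" for the example scheme.
DB_CONNECTION = {
    "relations": {            # part 1: relation -> concept
        "Suppliers": "*supplier",
        "Projects": "*project",
        "Parts": "*part",
        "Spj": "*supply",
    },
    "attributes": {            # part 2: attribute -> concept
        "sno": "*indexing-number", "pno": "*indexing-number", "jno": "*indexing-number",
        "sname": "*name", "pname": "*name", "jname": "*name",
        "scity": "*city", "jcity": "*city",
        "m": "*month", "y": "*year",
        "cost": "*cost", "weight": "*weight", "quantity": "*quantity",
    },
    "implicit": {              # part 3 (abridged): concepts implicit in a relation,
        "Spj": [                #   with case -> attribute (or computation)
            ("*supply", {"supplier": "sno", "supplied": "pno",
                         "suppliee": "jno", "time": ("m", "y")}),
            ("*spend", {"spender": "jno", "spendfor": "pno",
                        "amount": ("*", "cost", "quantity")}),   # computed case
        ],
        "Suppliers": [("*located", {"location": "scity"})],
        "Projects": [("*located", {"location": "jcity"})],
    },
}

def attributes_needed(case_value):
    """A computed case such as ("*", "cost", "quantity") may mention attributes
    that live in other relations; the query builder would join them in."""
    if isinstance(case_value, tuple) and case_value and case_value[0] == "*":
        return list(case_value[1:])
    return [case_value] if isinstance(case_value, str) else list(case_value)

print(attributes_needed(("*", "cost", "quantity")))   # ['cost', 'quantity']
```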
creating pseudo relations:
Pseudo Cities jcity,scity — this creates a pseudo relation, Cities(cname), so that the query building algorithm can treat all attributes as if they belong to a relation. The query produced by the system will refer to the Cities relation. A postprocessor is used to remove references to pseudo relations from the final query. Pseudo relations are important because they ensure uniform handling of attributes. With the pseudo Cities relation, questions like "Who supplies every city?" and "List the cities." can be treated identically to "Who supplies every project?" and "List the suppliers." The remainder of the data base connection is a set of switches which provide information on how to print out the relations, whether all proper nouns have been defined or are to be inferred, whether relations are multivalued, etc. The switch settings and the four components above constitute the entire data base connection; nothing else is needed.

The network of concepts in the NLP should only be augmented for a particular data base, never changed. Yet different data base schemes will require different representations for the same word. For example, depending on the data base scheme, it could be correct to represent "box" as any of

g1 Isa: *part Conditions: *named(g1,box)
g2 Isa: *container Conditions: *named(g2,box)
g3 Isa: *box

The solution is to define each word to map to the lowest possible concept. When a concept is encountered that has a data base relation associated with it, there is no problem. If there is no relation associated with a concept, the NLP searches for a concept that does correspond to a relation and is also a generalization of the concept in question. If one is found, it is used with an appropriate condition, usually *titled or *named. So "box" has a definition which maps to *box. In the data base connection given above, "box" would be instantiated as a *part, since *box is a refinement of *part and no relation maps to *box.

The information in the data base connection is primarily used in building the query (section 4). But it is also used to augment the knowledge base of the NLP. The data base connection is used to overwrite the NLP's case preferences. Since located -> Suppliers or Projects, the preference of located is specified to *suppliers or *projects. This enables the NLP to interpret the first noun group in "Do any suppliers that supply widgets located in london also supply screws?" as "suppliers in London that supply widgets" rather than "suppliers that supply London widgets". This is in contrast to [Gawron 3], which uses a separate "disambiguator" phase to eliminate parses that do not make sense in the conceptual scheme of the data base.

The additional preference information supplied by the data base connection is used to induce coercions (section 2.) that would not be made in the absence of the connection or under another data base scheme. "Who supplies London" does not break any real world preferences, but does break one of the preferences induced by this data base scheme, namely that the suppliee is a *project. London, a *city, is coerced to *project via the path [*project located *located location *city], and the question is understood to mean "Who supplies projects which are in London."

As mentioned in Section 2, the NLP determines the meaning of many modifications by searching for connections in a semantic net. The data base connection is used to augment and highlight the existing network of the NLP.
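The "define every word at the lowest concept, then generalize until a relation is found" rule lends itself to a short sketch. The isa and relation tables below are invented for illustration (only *part, *supplier, *project, *supply and their relations come from the example connection); this is not the system's actual data structure.

```python
# Sketch of resolving a word's concept to one that maps to a relation.
ISA = {"*box": "*part", "*widget": "*part", "*part": "*physical-object"}
RELATION_FOR = {"*part": "Parts", "*supplier": "Suppliers",
                "*project": "Projects", "*supply": "Spj"}

def instantiate(word, concept):
    """Return (concept-with-a-relation, conditions) for a word such as 'box'."""
    current = concept
    while current is not None and current not in RELATION_FOR:
        current = ISA.get(current)          # generalize: *box -> *part -> ...
    if current is None:
        raise ValueError(f"no relation reachable from {concept}")
    conditions = []
    if current != concept:                  # keep the specific word as a condition
        conditions.append(("*named", word))
    return current, conditions

print(instantiate("box", "*box"))   # ('*part', [('*named', 'box')])
```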
If the user says "What colors do parts come in?", the NLP can infer that the meaning of "come in" intended by the user is *colored, since the only path through the net between *color and *part derived from the case preferences induced by the data base connection is [*part colored *colored color *color]. Similarly, when given the noun group "London suppliers", the meaning is determined by tracing the shortest path through the highlighted net, [*supplier located *located location *city]. The longer path connecting *supplier and *city, [*supplier supplier *supply suppliee *project located *located location *city], which means "the suppliers that supply to London projects", is found when the NLP rejects the first meaning because of context. If the user says "What are the locations of the London suppliers", the system assumes the second meaning, since the first (in the domain of this data base scheme) leads to a tautological reading. The NLP is able to infer that "The locations of the suppliers located in London" is tautological while "The locations of the suppliers located in England" is not, because the data base connection has specified *located to be a single valued concept with its location case typed to *city. If the system were asked for the locations of suppliers in England, and it knew England was a country, the question would be interpreted as "the cities of the suppliers that are located in cities located in England."

A trace of the query building algorithm. The query building algorithm is illustrated by tracing its operation on the question "Does blake supply any projects in london?" The NLP's meaning representation for this question is shown below. The NLP treats most true-false questions with indefinites as requests for the data which would make the statement true. The question's meaning is "show the subset of London projects that are supplied by Blake." The query building algorithm builds up the query recursively. Given an instantiated concept with cases, it expands the contents of each case and links the results together with the relation corresponding to the concept. Given an instantiated concept with conditions, it expands each condition. For the example, we have the query shown in the trace, where the extra join resulting from the pseudo Cities relation has been removed by the post processor (section 3). Entirely as a side effect of the way the query is generated, the system can easily correct any false assumptions made by the user [Kaplan 7]. For example, if there were no projects in London, g11 would be empty and the system would respond, generating from the instantiated concept g10 (i.e., the names used in the query correspond to the names used in the knowledge representation), "There are no suppliers located in London."

No additional "violated presupposition" mechanism is required. The remainder of this section discusses several aspects of the query building process that the trace does not show. Negations are handled by introducing a set difference when necessary. If the example query were "Does Blake supply any projects that aren't in London?", the expansion of g7 would have included a set difference. In general, "or" becomes a union and "and" becomes an intersection. However, if an "and" conjunction is in a single valued case (information obtained from the data base connection), a union is used instead. Thus "Who supplies london and paris?" is interpreted as "Who supplies both London and Paris?" and "Who is in London and Paris?" is interpreted as "Who is in London and who is in Paris?"
in the example data base scheme. Quantifiers are handled by a post processing phase. "Does blake supply every project in London?" is handled identically to "Does Blake supply a project in London?" except that the expansion of "projects in London" is marked so that the post processor will be called. The post processor adds on a set of commands which check that the set difference of London projects and London projects that Blake supplies is empty. The resulting query is:

g1 = select from Suppliers where sname = blake
g2 = select from Projects where jcity = london
g3 = join Spj to g1
g4 = join g3 to g2
g5 = project jno from g2
g6 = project jno from g4
g7 = difference of g5 and g6
g8 = empty g7

The first four commands are the query for "Does Blake supply a project in London?". The last four check that no project in London is not supplied by Blake. A minor modification is needed to cover cases in which the query building algorithm is expanding an instantiated concept that references an instantiated concept that is being expanded in a higher recursive call. The following examples illustrate this. Consider the data base scheme below, taken from [Ullman 9]. In the first query "beer" was the only attribute projected from g1; in the second, the system projected both "beer" and "drinker", because in expanding "a beer he likes" it needed to expand an instantiated concept (the one representing "who") that was already being expanded. All of these cases interact gracefully with one another. For example, there is no problem in handling "Who supplies every project that is not supplied by blake and bowles".
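The recursive expansion and the quantifier post-processing can be made concrete with a small sketch. The instantiated-concept encoding below is invented for illustration, and the emitted commands follow the informal select/join/project/difference notation of the listing above; this is not the system's Franz Lisp implementation.

```python
# Toy re-creation of the query builder for "Does Blake supply every project
# in London?": build the existential plan recursively, then let the post
# processor append the set-difference check.
PLAN = []

def emit(text):
    name = f"g{len(PLAN) + 1}"
    PLAN.append(f"{name} = {text}")
    return name

def expand(node):
    """Recursively expand an instantiated concept into commands."""
    if node["type"] == "select":
        return emit(f"select from {node['relation']} where {node['test']}")
    if node["type"] == "supply":                 # the *supply concept -> Spj
        supplier = expand(node["supplier"])
        suppliee = expand(node["suppliee"])
        joined = emit(f"join Spj to {supplier}")
        return emit(f"join {joined} to {suppliee}")
    raise ValueError(node["type"])

def add_universal_check(all_projects, supplied_projects):
    """Post processor: every London project must be among those supplied."""
    p_all = emit(f"project jno from {all_projects}")
    p_sup = emit(f"project jno from {supplied_projects}")
    diff = emit(f"difference of {p_all} and {p_sup}")
    emit(f"empty {diff}")

question = {
    "type": "supply",
    "supplier": {"type": "select", "relation": "Suppliers", "test": "sname = blake"},
    "suppliee": {"type": "select", "relation": "Projects", "test": "jcity = london"},
}
supplied = expand(question)          # g1..g4: the existential reading
add_universal_check("g2", supplied)  # g5..g8: the "every" check
print("\n".join(PLAN))
```

An empty result for one of the intermediate sets is exactly the hook that lets the system report a false assumption ("There are no projects in London") as a side effect of query evaluation.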
advantages of this approach:
The system can understand anything it has a concept about, regardless of whether the concept is attached to a relation in the data base scheme. In the Suppliers data base from Section 4, parts had costs and weights associated with them, but not sizes. If a user asks "How big are each of the parts?" and the interface has a *size primitive (which it does), the query building process will attempt to find the relation which *size maps to and, on failing, will report back to the user, "There is no information in the data base about the size of the parts." This gives the user some information about what the data base contains. An answer like "I don't know what "big" means." would leave the user wondering whether size information was in the data base and obtainable if only the "right" word was used. The system can interpret user statements that are not queries. If the user says "A big supplier is a supplier that supplies more than 3 projects", the NLP can use the definition in answering later queries. The definition is not made on a "string" basis, e.g., substituting the words of one side of the definition for the other. Instead, whenever the query building algorithm encounters an instantiated concept that is a supplier with the condition *size(x, big), it builds a query substituting the condition from the definition that it can expand as a data base query. Thus the system can handle "big london suppliers" and answer "Which suppliers are big", which it couldn't if it were doing strict string substitution. This facility can be used to bootstrap common definitions. In a commercial flights application, with data base scheme Flights(fl#,carrier,from,to,departure,arrival,stops,cost), the word "nonstop" is defined to the system in English as "A nonstop flight is a flight that does not make any stops" and then saved along with the rest of the system's definitions. Coercions (section 2.) can be used to solve problems that may require inferences in other systems. [Grosz 6] discusses the query "Is there a doctor within 200 miles of Philadelphia" in the context of a scheme in which doctors are on ships and ships have distances from cities, and asserts that a system which handles this query must be able to infer that if a doctor is on a ship, and the ship is within 200 miles of Philadelphia, then the doctor is within 200 miles of Philadelphia. Using coercions, the query would be understood as "Is there a ship with a doctor on it that is within 200 miles of Philadelphia?", which solves the problem immediately. Since the preference information is only used to choose among competing interpretations, broken preferences can still be understood and responded to. The preference for the supplier case is specified to *supplier, but if the user says "How many parts does the sorter project supply?" the NLP will find the only interpretation and respond "projects do not supply parts, suppliers do." Ambiguities inherent in attribute values are handled using the same methods which handle words with multiple definitions. For example, 1980 may be an organization number, a telephone extension, a number, or a year. The NLP has a rudimentary (so far) expert system inference mechanism which can easily be used by the DBAP. One of the rules it uses is "If x is a precondition of y and z knows y is true then z knows x was and may still be true". One of the facts in the NLP knowledge base is that being married is a precondition of being divorced or widowed. If a user asks "Did Fred Smith used to be married?"
in a data base with the relation Employees(name, marital-status), the system can answer correctly by using its inference mechanism. The exact method is as follows. The data base application receives the true-false question "Fred Smith was married and Fred Smith is no longer married". Since the data base includes only current marital status information, the only way to answer the first part of the question is to infer it from some other information in the data base. The data base application sends the query to the NLP inference mechanism, which would ordinarily attempt to answer it by matching it against its knowledge base or by finding a theorem which would give it something else to match for. When called by the data base application, the inference mechanism simply uses its rule base to decide what it should match for, and then returns to the data base program. In this example, the inference mechanism receives "Fred Smith was married" and, using the precondition rule mentioned above, returns to the data base program "Is Fred Smith divorced" or "Is Fred Smith widowed", which can be answered by the data base. The DBAP can call the inference mechanism recursively if necessary.
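Two of the mechanisms described in this section — stored user definitions that are substituted as conditions rather than as strings, and the precondition rule behind the marital-status example — can be sketched as follows. The condition formats, rule tables and names below are invented for illustration; the actual system keeps these facts in its general knowledge base, not in Python dictionaries.

```python
# 1. Definition substitution: "A big supplier is a supplier that supplies
#    more than 3 projects" is stored as a condition rewrite, so that
#    "big london suppliers" and "Which suppliers are big?" both work.
DEFINITIONS = {
    ("*supplier", ("size", "big")): ("count-greater", "projects-supplied", 3),
}

def rewrite_conditions(concept, conditions):
    return [DEFINITIONS.get((concept, c), c) for c in conditions]

print(rewrite_conditions("*supplier", [("size", "big"), ("located", "london")]))
# [('count-greater', 'projects-supplied', 3), ('located', 'london')]

# 2. The precondition rule: being married is a precondition of being
#    divorced or widowed, so "Did Fred Smith used to be married?" becomes
#    questions about current marital status that the data base can answer.
PRECONDITION_OF = {"married": ["divorced", "widowed"]}

def rewrite_past_state(state):
    return [f"is the person currently {s}?" for s in PRECONDITION_OF.get(state, [])]

print(rewrite_past_state("married"))
# ['is the person currently divorced?', 'is the person currently widowed?']
```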
implementation status and details:
The DBAP is fully implemented and debugged. The NLP is implemented and still growing. Both are implemented in Franz Lisp, a dialect of LISP. Language processing and query generation are performed in virtually real time (average 1-3 cpu seconds) on a Vax 11-780. The system is intended to be used with a Data Base Management System. The interface between the DBAP and the DBMS is a straightforward translator from relational algebra to the query language of the DBMS. I have written a translator for Polaris [Gielan 4]. The system handles all the examples in this paper as well as a wide range of others (Appendix A). Several different data base schemes have been connected to the system for demonstrations, including one "real data base" abstracted from the on-line listing of the Bell Laboratories Company Directory.
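The final translation step is mechanical: each relational-algebra command is rewritten into the target query language. The fragment below is a minimal sketch of that idea; the command grammar and the SQL-like output are assumptions made for illustration (the paper's translator targeted the Polaris query language, not SQL).

```python
# Minimal sketch of a relational-algebra-to-query-language translator.
import re

def translate(command):
    m = re.match(r"(g\d+) = select from (\w+) where (\w+) = (\w+)", command)
    if m:
        g, rel, attr, val = m.groups()
        return f"CREATE VIEW {g} AS SELECT * FROM {rel} WHERE {attr} = '{val}';"
    m = re.match(r"(g\d+) = project (\w+) from (g\d+)", command)
    if m:
        g, attr, src = m.groups()
        return f"CREATE VIEW {g} AS SELECT DISTINCT {attr} FROM {src};"
    raise ValueError(f"unrecognized command: {command}")

print(translate("g1 = select from Suppliers where sname = blake"))
print(translate("g5 = project jno from g2"))
```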
Appendix:
| null | null | null | null | {
"paperhash": [
"grosz|transportable_natural-language_interfaces:_problems_and_techniques",
"gawron|processing_english_with_a_generalized_phrase_structure_grammar",
"ullman|principles_of_database_systems"
],
"title": [
"Transportable Natural-Language Interfaces: Problems and Techniques",
"Processing English With a Generalized Phrase Structure Grammar",
"Principles of Database Systems"
],
"abstract": [
"I will address the questions posed to the panel from wlthln the context of a project at SRI, TEAM [Grosz, 1982b], that is developing techniques for transportable natural-language interfaces. The goal of transportability is to enable nonspeciallsts to adapt a natural-language processing system for access to an existing conventional database. TEAM is designed to interact with two different kinds of users. During an acquisition dlalogue, a database expert (DBE) provides TEAM with information about the files and fields in the conventlonal database for which a natural-language interface is desired. (Typlcally this database already exists and is populated, but TEAM also provides facillties for creating small local databases.) This dlalogue results in extension of the language-processlng and data access components that make it possible for an end user to query the new database in natural language.",
"This paper describes a natural language processing system implemented at Hewlett-Packard's Computer Research Center. The system's main components are: a Generalized Phrase Structure Grammar (GPSG); a top-down parser; a logic transducer that outputs a first-order logical representation; and a \"disambiguator\" that uses sortal information to convert \"normal-form\" first-order logical expressions into the query language for HIRE, a relational database hosted in the SPHERE system. We argue that theoretical developments in GPSG syntax and in Montague semantics have specific advantages to bring to this domain of computational linguistics. The syntax and semantics of the system are totally domain-independent, and thus, in principle, highly portable. We discuss the prospects for extending domain-independence to the lexical semantics as well, and thus to the logical semantic representations.",
"A large part is a description of relations, their algebra and calculus, and the query languages that have been designed using these concepts. There are explanations of how the theory can be used to design good systems. A description of the optimization of queries in relation-based query languages is provided, and a chapter is devoted to the recently developed protocols for guaranteeing consistency in databases that are operated on by many processes concurrently"
],
"authors": [
{
"name": [
"B. Grosz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Gawron",
"Jonathan J. King",
"J. Lamping",
"E. Loebner",
"E. Anne Paulson",
"G. Pullum",
"Ivan Sag",
"T. Wasow"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Ullman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null
],
"s2_corpus_id": [
"12762659",
"14372141",
"61817775"
],
"intents": [
[],
[],
[]
],
"isInfluential": [
false,
false,
false
]
} | Problem: The paper aims to describe a robust and portable NL database interface consisting of a Natural Language Processor (NLP) and a database application program (DBAP).
Solution: The hypothesis of the paper is that by utilizing a general-purpose NLP to build a formal representation of English utterances and an algorithmic DBAP to generate queries, a highly robust and portable database interface can be achieved. | 504 | 0.055556 | null | null | null | null | null | null | null | null |
fefa5753b1ea48d768cdc9e0bc1b568e64cb083a | 6911768 | null | Problems in Natural-Language Interface to {DBMS} With Examples From {EUFID} | For five years the End-User Friendly Interface to Data management (EUFID) project team at System Development Corporation worked on the design and implementation of a Natural-Language Interface (NLI) system that was to be independent of both the application and the database management system. In this paper we describe application, natural-language and database management problems involved in NLI development, with specific reference to the EUFID system as an example. | {
"name": [
"Templeton, Marjorie and",
"Burger, John"
],
"affiliation": [
null,
null
]
} | null | null | First Conference on Applied Natural Language Processing | 1983-02-01 | 17 | 68 | null | null | null | For five years the End-User Friendly Interface to Data management (EUFID) project team at System Development Corporation worked on the design and implementation of a Natural-Language Interface (NLI) system that was to be independent of both the application and the database management system. In this paper we describe application, natural-language and database management problems involved in NLI development, with specific reference to the EUFID system as an example. Language", "WorldLanguage", and "Data Base Language" and appear to correspond roughly to the "external", "conceptual", and "internal" views of data as described by C. J. Date[DATE77]. PHLIQAI can interface to a variety of database structures and DBMSs.The -to be application independent. This means that the program must be table driven.The tables contain the dictionary and semantic information and are loaded with application-speciflc data.It was desired that the tables could be constructed by someone other than the EUFID staff, so This language represents, in many ways, the union of the capabilities of many "target" DBMS query languages.thatThe EUFID system consists of three major modules, not counting the DBM3 (see Figure I ).The values which can be be used to convert from one unit of measure to another (e.g., feet to meters).These data are used by the run-time modules which map and translate the tree-structured output of the analyzer to IL on the actual group/field names of the database, and then co the language of the DBMS.These modules are discussed in the next sections.The EUFID Analyzer "root" to the database structure.Access may be made from any relation to any other relation as long as there is a field in each of the two relations which has the same "domain" (set of values). These are discussed below.REPortingAn English word may have more than one definition without complicating the analysis strategy. For example, "ship" as a vessel and as a verb meaning "to send" can be defined in the same dictionary.Words used as database values, such as names, may also have multiple definitions, e.g., "New York" used as the name of both a city and a state. of which must ~oin both the company ("c =) and the warehouse ('w') relations to the =cw" relation. Values can also be used in a query to qualify or select certain records for output, e.g., in the above question "North Hills" and "Superior" are values that must be represented in the query to the DBMS.As long as the alphanumeric values used in a particular database field are the same as words in the English questions, there are no difficult problems involved in recognizing values as selectors in a query.There are three basic ways to recognize these value words in a question. They can be explicitly listed in the dictionary, recognized by a pattern or context, or found in the database itself.If the value words are stored in the dictionary, they can be subject to spelling correction because the spelling corrector uses the dictionary to locate words which are a close match to unrecognized words in a question. In APPLICANT, however, each applicant has a set of "specialties" such as "computer programmer", "accounting clerk", or "gardener". These are all stored as values of the specialty field in the database. 
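The three ways of recognizing value words mentioned above (explicit dictionary listing, pattern or context, and lookup in the database itself) can be illustrated with a short sketch. The tables, field names and fallback order below are assumptions for illustration only and are not EUFID's actual tables.

```python
# Sketch of value-word classification for query selectors.
import re

DICTIONARY_VALUES = {"new york": ["city", "state"], "north hills": ["city"]}
PATTERNS = [(re.compile(r"^\d{4}$"), "year-or-extension")]
DATABASE_VALUES = {"specialty": {"computer programmer", "accounting clerk", "gardener"}}

def classify_value(token):
    token_l = token.lower()
    if token_l in DICTIONARY_VALUES:                      # 1. listed in the dictionary
        return ("dictionary", DICTIONARY_VALUES[token_l])
    for pattern, label in PATTERNS:                       # 2. recognized by pattern
        if pattern.match(token_l):
            return ("pattern", [label])
    for field, values in DATABASE_VALUES.items():         # 3. found in the data base
        if token_l in values:
            return ("database", [field])
    return ("unknown", [])

print(classify_value("New York"))   # ('dictionary', ['city', 'state'])
print(classify_value("gardener"))   # ('database', ['specialty'])
```

Only values listed in the dictionary can benefit from spelling correction, as the text notes, since the spelling corrector works against the dictionary.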
Essentially syntactic information is used only when needed to resolve ambiguity. The language features that this technique has to handle are common to any NLI, and some of the problem areas are described in the following sections. To support natural interaction it is desirable to allow the use of anaphoric reference and elliptical constructions across sentence sequences, such as "What applicants know Fortran and C?", "Which of them live in California?", "In Nevada?", "How many know Pascal?". One of the biggest problems is to define the scope of the reference in such cases. In the example, it is not clear whether the user wishes to retrieve the set of all applicants who know Pascal or only the subset who live in Nevada. One solution is to provide commands that allow users to define subsets of the database to which to address questions. This removes the ambiguity and speeds up retrieval time on a large database. However, it moves the NLI interaction toward that of a structured query language, and forces the user to be aware of the level of subset being accessed.
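The scope problem can be seen in a small sketch that keeps the previous answer set as the referent for "them" and for elliptical follow-ups. Nothing here is EUFID code; the data and the `narrow` switch are invented to show that the ambiguity is exactly the choice of which saved set a follow-up question filters.

```python
# Sketch of anaphora/ellipsis resolution via a saved result-set context.
class Session:
    def __init__(self, applicants):
        self.all = applicants
        self.current = applicants        # referent for "them" / ellipsis

    def ask(self, predicate, narrow=True):
        base = self.current if narrow else self.all
        self.current = [a for a in base if predicate(a)]
        return self.current

applicants = [
    {"name": "Jones", "languages": {"Fortran", "C", "Pascal"}, "state": "CA"},
    {"name": "Lee",   "languages": {"Fortran", "C"},           "state": "NV"},
    {"name": "Smith", "languages": {"Pascal"},                 "state": "NV"},
]
s = Session(applicants)
s.ask(lambda a: {"Fortran", "C"} <= a["languages"])   # "What applicants know Fortran and C?"
s.ask(lambda a: a["state"] == "CA")                   # "Which of them live in California?"
print([a["name"] for a in s.ask(lambda a: a["state"] == "NV", narrow=False)])
# "In Nevada?" read against all applicants rather than the current subset.
```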
This is especially true with negation.For example, if the user asks, "Does every company in North Hills except Supreme use NH2?", the answer may be "no" because Supreme is not in North Hills.The current implementation of EUFID does not allow explicit negation, although some negative concepts are handled such as "What companies ship to companies other than Colonial?". "Other than" is interpreted as the "!-" operator in exactly the same way that "greater than" is interpreted as ">".Many questions make perfect sense semantically but are difficult to map into DBMS queries because of the database structure.The problems become worse when access is through an NLI because of increased expectations on the part of the user and because it may be difficult for a help system adequately to describe the problem to the user who is unaware of the database structure.Negative requests may contain explicit negative words such as "not" and "never" or may contain implicit negatives such as "only", "except" and "other than" [OLNE78] . In the open world database, which we encounter most of the time, a response of "not that this database knows of" might be more appropriate.The design of the IL is critical. It must be rich enough to support retrieval from all the underlying DBMSs. However, if it contains capabilities that do not exist in a specific DBMS, it is difficult to describe this deficiency to the user.In APPLICANT, the user cannot get both the major and minor fields of study by asking "List applicants and field of study", because a limitation in the EUFID IL prevents making two joins between education and subject records.This problem was corrected in a subsequent version of IL with the addition of a "range" state- in the proper case, the value will not match.A very simple question in English can turn into a very complicated request in the query language if it involves retrieval of data which must be used for qualification in another part of the same query.In IL these are called "nested queries".Most often some qualification needs to be done both "inside" and "outside" the clause of the query that does the internal retrieve. There are problems that need to be solved on both the front end, the parsing of the English question, and the back end, the translation of the question into a data management system query.It is important to understand the types of requests, types of functions, and types of databases that can be supported by a specific NLI. the users of the NLI should have a common use for the data and a common vlew of the data, and 7.there must be some user who understands the questions that will be asked and is available to work with the developers of the NLI.We believe that current system development is limited by the need for good semantic modelling techniques and the length of time needed to build the knowledge base required to interface with a new application. When the knowledge base for the NLI is developed, the database as well as sample input must be considered in the design.Parsing of questions to a database cannot be divorced from the database contents since semantic interpretation can only be determined in the context of that database. On the other hand, a robust system cannot be developed by considering only database structure and content, because the range of the questions allowed would not accurately reflect the user view of the application and also would not account for all the information that is inferred at some level. | null | Main paper:
yes/no questions:
In normal NLI interaction users may wish to ask "yes/no" questions, yet no DBMS has the ability to answer "yes" or "no" explicitly. The EUFID mapper maps a yes/no question into a query which will retrieve some data, such as an "output identifier" or default name for a concept, if the answer is "yes", and no data if the answer is "no". However, the answer may be "no" for several reasons. The NLI user is not expected to understand exactly how data is stored, and yet must understand something about the granularity of the data. Time fields often cause problems because time may be given by year or by fractions of a second. Users may make time comparisons that require more granularity than is stored in the database. For example, the user can ask "What incidents were reported at SAC while system release 3.4 was installed?". If incidents were reported by day but system release dates were given by month, the system would return incidents which occurred in the days of the month before the system release was installed.
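The yes/no mapping amounts to wrapping an ordinary retrieval and interpreting an empty result as "no". A minimal sketch, with invented sample data and function names, is shown below; the caveat in the text still applies — a "no" may only mean the data is absent or stored at a different granularity.

```python
# Sketch of answering a yes/no question by retrieval.
def answer_yes_no(rows):
    return ("Yes: " + ", ".join(rows)) if rows else "No (no matching data found)"

def companies_in_city(db, city, uses=None):
    return [c["name"] for c in db
            if c["city"] == city and (uses is None or uses in c["materials"])]

db = [{"name": "Superior", "city": "North Hills", "materials": {"NH2"}},
      {"name": "Colonial", "city": "Van Nuys",    "materials": {"NH3"}}]

# "Does any company in North Hills use NH2?"
print(answer_yes_no(companies_in_city(db, "North Hills", uses="NH2")))
```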
conjunctions:
The scope of conjunctions is a difficult problem for any parsing or analyzing algorithm. The natural-language use of "and" and "or" does not necessarily correspond to the logical meaning, as in the question "List the applicants who live in California and Arizona.". Multiple conjunctions in a single question can be ambiguous, as in "Which minority and female applicants know Fortran and Cobol?". This could be interpreted with logical "and" or with logical "or", as in "Which applicants who are minority or female know either Fortran or Cobol?". The EUFID mapper will change English "and" to logical "or" when the two phrases within the scope of the conjunction are values for the same field. In the example above, an applicant has only one state of residence. uncertain whether they should be returned in the answer. It is also difficult to take a complement of a set of data using the many data management systems that do not support set operators between relations. Questions which require a "yes" or "no" response are difficult to answer because often the "no" is due to a presupposition which is invalid.
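The mapper's and-to-or rule can be sketched as a single check against field metadata. The field table and condition format below are assumptions for illustration, not EUFID's internal representation.

```python
# Sketch of turning English "and" into logical OR for single-valued fields.
SINGLE_VALUED = {"state"}     # each applicant has exactly one state of residence

def combine(field, values, english_connective="and"):
    if english_connective == "and" and field in SINGLE_VALUED and len(values) > 1:
        op = "OR"             # "live in California and Arizona" -> state = CA OR AZ
    else:
        op = english_connective.upper()
    return "(" + f" {op} ".join(f"{field} = '{v}'" for v in values) + ")"

print(combine("state", ["California", "Arizona"]))
# (state = 'California' OR state = 'Arizona')
print(combine("language", ["Fortran", "Cobol"]))
# (language = 'Fortran' AND language = 'Cobol')
```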
There are problems that need to be solved on both the front end, the parsing of the English question, and the back end, the translation of the question into a data management system query.It is important to understand the types of requests, types of functions, and types of databases that can be supported by a specific NLI. the users of the NLI should have a common use for the data and a common vlew of the data, and 7.there must be some user who understands the questions that will be asked and is available to work with the developers of the NLI.We believe that current system development is limited by the need for good semantic modelling techniques and the length of time needed to build the knowledge base required to interface with a new application. When the knowledge base for the NLI is developed, the database as well as sample input must be considered in the design.Parsing of questions to a database cannot be divorced from the database contents since semantic interpretation can only be determined in the context of that database. On the other hand, a robust system cannot be developed by considering only database structure and content, because the range of the questions allowed would not accurately reflect the user view of the application and also would not account for all the information that is inferred at some level.
:
For five years the End-User Friendly Interface to Data management (EUFID) project team at System Development Corporation worked on the design and implementation of a Natural-Language Interface (NLI) system that was to be independent of both the application and the database management system. In this paper we describe application, natural-language and database management problems involved in NLI development, with specific reference to the EUFID system as an example. Language", "WorldLanguage", and "Data Base Language" and appear to correspond roughly to the "external", "conceptual", and "internal" views of data as described by C. J. Date[DATE77]. PHLIQAI can interface to a variety of database structures and DBMSs.The -to be application independent. This means that the program must be table driven.The tables contain the dictionary and semantic information and are loaded with application-speciflc data.It was desired that the tables could be constructed by someone other than the EUFID staff, so This language represents, in many ways, the union of the capabilities of many "target" DBMS query languages.thatThe EUFID system consists of three major modules, not counting the DBM3 (see Figure I ).The values which can be be used to convert from one unit of measure to another (e.g., feet to meters).These data are used by the run-time modules which map and translate the tree-structured output of the analyzer to IL on the actual group/field names of the database, and then co the language of the DBMS.These modules are discussed in the next sections.The EUFID Analyzer "root" to the database structure.Access may be made from any relation to any other relation as long as there is a field in each of the two relations which has the same "domain" (set of values). These are discussed below.REPortingAn English word may have more than one definition without complicating the analysis strategy. For example, "ship" as a vessel and as a verb meaning "to send" can be defined in the same dictionary.Words used as database values, such as names, may also have multiple definitions, e.g., "New York" used as the name of both a city and a state. of which must ~oin both the company ("c =) and the warehouse ('w') relations to the =cw" relation. Values can also be used in a query to qualify or select certain records for output, e.g., in the above question "North Hills" and "Superior" are values that must be represented in the query to the DBMS.As long as the alphanumeric values used in a particular database field are the same as words in the English questions, there are no difficult problems involved in recognizing values as selectors in a query.There are three basic ways to recognize these value words in a question. They can be explicitly listed in the dictionary, recognized by a pattern or context, or found in the database itself.If the value words are stored in the dictionary, they can be subject to spelling correction because the spelling corrector uses the dictionary to locate words which are a close match to unrecognized words in a question. In APPLICANT, however, each applicant has a set of "specialties" such as "computer programmer", "accounting clerk", or "gardener". These are all stored as values of the specialty field in the database. 
Essentially syntactic information is used only when needed to resolve ambiguity.The language features that this technique has to handle are common to any NLI, and some of the problem areas are described in the following sections.To support natural interaction it is desirable to allow the use of anaphoric reference and elliptical constructions across sentence sequences, such as "What applicants know Fortran and C?", "Which of them live in California?", "In Nevada?", "How many know Pascal?'. One of the biggest problems is to define the scope of the reference in such cases.In the example, it is not clear whether the user wishes to retrieve the set of all applicants who know Pascal or only the subset who live in Nevada.One solution is to provide commands that allow users to define subsets of the database to which to address questions. This removes the ambiguity and speeds up retrieval time on a large database.However, it moves the NLI interaction toward that of a structured query language, and forces the user to be aware of the level of subset being accessed.
Appendix:
| null | null | null | null | {
"paperhash": [
"burger|semantic_database_mapping_in_eufid",
"templeton|eufid:_a_friendly_and_flexible_front-end_for_data_management_systems",
"harris|the_robot_system:_natural_language_processing_applied_to_data_base_query",
"waltz|an_english_language_question_answering_system_for_a_large_relational_database",
"hendrix|developing_a_natural_language_interface_to_complex_data",
"waltz|natural_language_interfaces",
"scha|semantic_grammar:_an_engineering_technique_for_constructing_natural_language_understanding_systems",
"stonebraker|the_design_and_implementation_of_ingres",
"kellogg|the_converse_natural_language_data_management_system:_current_status_and_plans",
"thompson|rel:_a_rapidly_extensible_language_system",
"aho|the_theory_of_parsing,_translation,_and_compiling",
"simmons|answering_english_questions_by_computer:_a_survey"
],
"title": [
"Semantic database mapping in EUFID",
"EUFID: A Friendly and Flexible Front-End for Data Management Systems",
"The ROBOT System: Natural language processing applied to data base query",
"An English language question answering system for a large relational database",
"Developing a natural language interface to complex data",
"Natural language interfaces",
"Semantic grammar: an engineering technique for constructing natural language understanding systems",
"The design and implementation of INGRES",
"The converse natural language data management system: current status and plans",
"REL: A Rapidly Extensible Language system",
"The Theory of Parsing, Translation, and Compiling",
"Answering English questions by computer: a survey"
],
"abstract": [
"The End-User Friendly Interface to Data Management (EUFID) is a processing system of programs which permits users to query a database in a natural English-like way. The EUFID system translates the user's question into a query expressed in the query language of the target DataBase Management System (DBMS). EUFID makes use of two very different views of the applications data: that of the users, and that of the DBMS. This paper describes the mapping of query statements from one view to the other. Mapping is discussed in general terms as well as in terms of the specific algorithms of EUFID. Examples are given.",
"EUFID is a natural language frontend for data management systems. It is modular and table driven so that it can be interfaced to different applications and data management systems. It allows a user to query his data base in natural English, including sloppy syntax and misspellings. The tables contain a data management system view of the data base, a semantic/syntactic view of the application, and a mapping from the second to the first.",
"In the early 1970's the natural language processing techniques developed within the field of artificial intelligence (AI) made important progress. Within certain restricted micro worlds of discourse it became possible to process a reasonably large class of English. These techniques have now been applied to the real micro world of data base query, allowing for information to be extracted from data bases by asking ordinary English questions. This paper discusses the importance of true natural language data base query and describes the ROBOT system, a high performance production level system already installed in several real world environments. The specific data structure requirements of the ROBOT system are discussed, as well as an extended type of data inversion that provides precisely the functionality required by the natural language parser.",
"By typing requests in English, casual users will be able to obtain explicit answers from a large relational database of aircraft flight and maintenance data using a system called PLANES. The design and implementation of this system is described and illustrated with detailed examples of the operation of system components and examples of overall system operation. The language processing portion of the system uses a number of augmented transition networks, each of which matches phrases with a specific meaning, along with context registers (history keepers) and concept case frames; these are used for judging meaningfulness of questions, generating dialogue for clarifying partially understood questions, and resolving ellipsis and pronoun reference problems. Other system components construct a formal query for the relational database, and optimize the order of searching relations. Methods are discussed for handling vague or complex questions and for providing browsing ability. Also included are discussions of important issues in programming natural language systems for limited domains, and the relationship of this system to others.",
"Aspects of an intelligent interface that provides natural language access to a large body of data distributed over a computer network are described. The overall system architecture is presented, showing how a user is buffered from the actual database management systems (DBMSs) by three layers of insulating components. These layers operate in series to convert natural language queries into calls to DBMSs at remote sites. Attention is then focused on the first of the insulating components, the natural language system. A pragmatic approach to language access that has proved useful for building interfaces to databases is described and illustrated by examples. Special language features that increase system usability, such as spelling correction, processing of incomplete inputs, and run-time system personalization, are also discussed. The language system is contrasted with other work in applied natural language processing, and the system's limitations are analyzed.",
"As you can see by the thickness of this issue, the response to my request for contributions was overwhelming - I received 52 separate items! Since the contributions arrived over a period of time, and since we had to use archaic means (typing, cutting and pasting) to produce this newsletter, the articles are not in optimal order. I hope that the index below will help untangle the issue.",
"One of the major stumbling blocks to more effective used computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language under-standing systems. The primary purpose of this research is not to advance our theoretical under-standing of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.",
"The currently operational (March 1976) version of the INGRES database management system is described. This multiuser system gives a relational view of data, supports two high level nonprocedural data sublanguages, and runs as a collection of user processes on top of the UNIX operating system for Digital Equipment Corporation PDP 11/40, 11/45, and 11/70 computers. Emphasis is on the design decisions and tradeoffs related to (1) structuring the system into processes, (2) embedding one command language in a general purpose programming language, (3) the algorithms implemented to process interactions, (4) the access methods implemented, (5) the concurrency and recovery control currently provided, and (6) the data structures used for system catalogs and the role of the database administrator.\nAlso discussed are (1) support for integrity constraints (which is only partly operational), (2) the not yet supported features concerning views and protection, and (3) future plans concerning the system.",
"This paper presents an overview of research in progress in which the principal aim is the achievement of more natural and expressive modes of on-line communication with complexly structured data bases. A natural-language compiler has been constructed that accepts sentences in a user-extendable English subset, produces surface and deep-structure syntactic analyses, and uses a network of concepts to construct semantic interpretations formalized as computable procedures. The procedures are evaluated by a data management system that updates, modifies, and searches data bases that can be formalized as finite models of states of affairs. The system has been designed and programmed to handle large vocabularies and large collections of facts efficiently. Plans for extending the research vehicle to interface with a deductive inference component and a voice input-output effort are briefly described.",
"In the first two sections of this paper we review the design philosophy which gives rise to these features, and sketch the system architecture which reflects them. Within this framework, we have sought to provide languages which are natural for typical users. The third section of this paper outlines one such application language, REL English.\n The REL system has been implemented at the California Institute of Technology, and will be the conversational system for the Caltech campus this fall. The system hardware consists of an IBM 360/50 computer with 256K bytes of core, a drum, IBM 2314 disks, an IBM 2250 display, 62 IBM 2741 typewriter consoles distributed around the campus, and neighboring colleges. Base languages provided are CITRAN (similar to RAND's JOSS), and REL English. A basic statistical package and a graphics package are also available for building special purpose languages around specific courses and user requirements.",
"From volume 1 Preface (See Front Matter for full Preface) \n \nThis book is intended for a one or two semester course in compiling theory at the senior or graduate level. It is a theoretically oriented treatment of a practical subject. Our motivation for making it so is threefold. \n \n(1) In an area as rapidly changing as Computer Science, sound pedagogy demands that courses emphasize ideas, rather than implementation details. It is our hope that the algorithms and concepts presented in this book will survive the next generation of computers and programming languages, and that at least some of them will be applicable to fields other than compiler writing. \n \n(2) Compiler writing has progressed to the point where many portions of a compiler can be isolated and subjected to design optimization. It is important that appropriate mathematical tools be available to the person attempting this optimization. \n \n(3) Some of the most useful and most efficient compiler algorithms, e.g. LR(k) parsing, require a good deal of mathematical background for full understanding. We expect, therefore, that a good theoretical background will become essential for the compiler designer. \n \nWhile we have not omitted difficult theorems that are relevant to compiling, we have tried to make the book as readable as possible. Numerous examples are given, each based on a small grammar, rather than on the large grammars encountered in practice. It is hoped that these examples are sufficient to illustrate the basic ideas, even in cases where the theoretical developments are difficult to follow in isolation. \n \nFrom volume 2 Preface (See Front Matter for full Preface) \n \nCompiler design is one of the first major areas of systems programming for which a strong theoretical foundation is becoming available. Volume I of The Theory of Parsing, Translation, and Compiling developed the relevant parts of mathematics and language theory for this foundation and developed the principal methods of fast syntactic analysis. Volume II is a continuation of Volume I, but except for Chapters 7 and 8 it is oriented towards the nonsyntactic aspects of compiler design. \n \nThe treatment of the material in Volume II is much the same as in Volume I, although proofs have become a little more sketchy. We have tried to make the discussion as readable as possible by providing numerous examples, each illustrating one or two concepts. \n \nSince the text emphasizes concepts rather than language or machine details, a programming laboratory should accompany a course based on this book, so that a student can develop some facility in applying the concepts discussed to practical problems. The programming exercises appearing at the ends of sections can be used as recommended projects in such a laboratory. Part of the laboratory course should discuss the code to be generated for such programming language constructs as recursion, parameter passing, subroutine linkages, array references, loops, and so forth.",
"Fifteen experimental English language question-answering I systems which are programmed and operating are described ) arid reviewed. The systems range from a conversation machine ~] to programs which make sentences about pictures and systems s~ which translate from English into logical calculi. Systems are ~ classified as list-structured data-based, graphic data-based, ~! text-based and inferential. Principles and methods of opera~4 tions are detailed and discussed. It is concluded that the data-base question-answerer has > passed from initial research into the early developmental ~.4 phase. The most difficult and important research questions for ~i~ the advancement of general-purpose language processors are seen to be concerned with measuring meaning, dealing with ambiguities, translating into formal languages and searching large tree structures."
],
"authors": [
{
"name": [
"John F. Burger"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Templeton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"L. R. Harris"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Waltz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Hendrix",
"E. Sacerdoti",
"Daniel Sagalowicz",
"Jonathan Slocum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Waltz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. J. H. Scha"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Stonebraker",
"E. Wong",
"Peter Kreps",
"Gerald Held"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Charles Kellogg",
"J. Burger",
"T. Diller",
"Kenneth Fogt"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"F. B. Thompson",
"P. Lockemann",
"Bozena Henisz-Dostert",
"R. S. Deverill"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Aho",
"J. Ullman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. F. Simmons"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"15812459",
"18957176",
"1732335",
"18227465",
"15391397",
"9557135",
"263227606",
"1514658",
"10172580",
"14782642",
"60775129",
"17660655"
],
"intents": [
[],
[],
[],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[],
[],
[
"methodology"
],
[
"methodology"
]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | - Problem: The paper addresses the challenges involved in developing a Natural-Language Interface (NLI) system, specifically focusing on application, natural-language, and database management problems within the EUFID project.
- Solution: The hypothesis of the paper is to design and implement an NLI system that is application-independent, database-independent, capable of fast response times, able to handle nonstandard or poorly-formed questions, portable to various machines, and able to handle syntactic information only when necessary to resolve ambiguity. | 504 | 0.134921 | null | null | null | null | null | null | null | null |
94d6add8f49a9ddcc36cf3096adb652154f70501 | 507186 | null | The Fitted Parse: 100{\%} Parsing Capability in a Syntactic Grammar of {E}nglish | A technique is described for performing fitted parsing. After the rules of a more conventional syntactic grammar are unable to produce a parse for an input string, this technique can be used to produce a reasonable approximate parse that can serve as input to the remaining stages of processing. The paper describes how fitted parsing is done in the EPISTLE system and discusses how it can help in dealing with many difficult problems of natural language analysis. | {
"name": [
"Jensen, Karen and",
"Heidorn, George E."
],
"affiliation": [
null,
null
]
} | null | null | First Conference on Applied Natural Language Processing | 1983-02-01 | 14 | 34 | null | The EPISTLE project has as its long-range goal the machine processing of natural language text in an office environment. Ultimately we intend to have software that will be able to parse and understand ordinary prose documents (such as those that an office principal might expect his secretary to cope with), and will be able to generate at least a first draft of a business letter or memo. Our current goal is a system for critiquing written material on points of grammar and style. Our grammar is written in NLP (Heidorn 1972), an augmented phrase structure language which is implemented in LISP/370. The EPISTLE grammar currently uses syntactic, but not semantic, information. Access to an on-line standard dictionary with about 130,000 entries, including part-of-speech and some other syntactic information (such as transitivity of verbs), makes the system's vocabulary essentially unlimited. We test and improve the grammar by regularly running it on a data base of 2254 sentences from 411 actual business letters. Most of these sentences are rather complicated; the longest contains 63 words, and the average length is 19.2 words. Since the subset of English which is represented in business documents is very large, we need a very comprehensive grammar and robust parser. In the course of this work we have developed some new techniques to help deal with the refractory nature of natural language syntax. In this paper we discuss one such technique: the fitted parse, which guarantees the production of a reasonable parse tree for any string, no matter how unorthodox that string may be. The parse which is produced by fitting might not be perfect; but it will always be reasonable and useful, and will allow for later refinement by semantic processing. There is a certain perception of parsing that leads to the development of techniques like this one: namely, that trying to write a grammar to describe explicitly all and only the sentences of a natural language is about as practical as trying to find the Holy Grail. Not only will the effort expended be Herculean, it will be doomed to failure. Instead we take a heuristic approach and consider that a natural language parser can be divided into three parts: (a) a set of rules, called the core grammar, that precisely define the central, agreed-upon grammatical structures of a language; (b) peripheral procedures that handle parsing ambiguity: when the core grammar produces more than one parse, these procedures decide which of the multiple parses is to be preferred; (c) peripheral procedures that handle parsing failure: when the core grammar cannot define an acceptable parse, these procedures assign some reasonable structure to the input. In EPISTLE, (a) the core grammar consists at present of a set of about 300 syntax rules; (b) ambiguity is resolved by using a metric that ranks alternative parses (Heidorn 1982); and (c) parse failure is handled by the fitting procedure described here. In using the terms core grammar and periphery we are consciously echoing recent work in generative grammar, but we are applying the terms in a somewhat different way.
Core grammar, in current linguistic theory, suggests the notion of a set of very general rules which define universal properties of human language and effectively set limits on the types of grammars that any particular language may have; periphery phenomena are those constructions which are peculiar to particular languages and which require added rules beyond what the core grammar will provide (Lasnik and Freidin 1981). Our current work is not concerned with the meta-rules of a Universal Grammar. But we have found that a distinction between core and periphery is useful even within a grammar of a particular language -- in this case, English. This paper first reviews parsing in EPISTLE, and then describes the fitting procedure, followed by several examples of its application. Then the benefits of parse fitting and the results of using it in our system are discussed, followed by its relation to other work. EPISTLE's parser is written in the NLP programming language, which works with augmented phrase structure rules and with attribute-value records, which are manipulated by the rules. When NLP is used to parse natural language text, the records describe constituents, and the rules put these constituents together to form ever larger constituent (or record) structures. Records contain all the computational and linguistic information associated with words, with larger constituents, and with the parse formation. At this time our grammar is sentence-based; we do not, for instance, create record structures to describe paragraphs. Details of the EPISTLE system and of its core grammar may be found in Miller et al., 1981, and Heidorn et al., 1982. A close examination of parse trees produced by the core grammar will often reveal branch attachments that are not quite right: for example, semantically incongruous prepositional phrase attachments. In line with our pragmatic parsing philosophy, our core grammar is designed to produce unique approximate parses. (Recall that we currently have access only to syntactic and morphological information about constituents.) In the cases where semantic or pragmatic information is needed before a proper attachment can be made, rather than produce a confusion of multiple parses we force the grammar to try to assign a single parse. This is usually done by forcing some attachments to be made to the closest, or rightmost, available constituent. This strategy only rarely impedes the type of grammar-checking and style-checking that we are working on. And we feel that a single parse with a consistent attachment scheme will yield much more easily to later semantic processing than would a large number of different structures. The rules of the core grammar (CG) produce single approximate parses for the largest percentage of input text. The CG can always be improved and its coverage extended; work on improving the EPISTLE CG is continual. But the coverage of a core grammar will never reach 100%. Natural language is an organic symbol system; it does not submit to cast-iron control. For those strings that cannot be fully parsed by rules of the core grammar we use a heuristic best fit procedure that produces a reasonable parse structure. The fitting procedure begins after the CG rules have been applied in a bottom-up, parallel fashion, but have failed to produce an S node that covers the string.
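To make that division of labor concrete, the following minimal Python sketch shows the control flow implied by the description above: apply the core grammar, accept a parse if an S node covering the whole string was built (using the ranking metric when there is more than one), and otherwise fall back to the fitting procedure. The names (Constituent, core_grammar_parse, fit) and the direction of the metric comparison are illustrative assumptions, not the actual NLP/LISP-370 implementation used in EPISTLE.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Constituent:
    """A partial analysis built during bottom-up parsing (stand-in for an NLP record)."""
    label: str      # e.g. "S", "VP", "NP"
    start: int      # index of the first word covered
    end: int        # index one past the last word covered
    metric: float   # score assigned by the parse-ranking metric

def analyze(words: List[str],
            core_grammar_parse: Callable[[List[str]], List[Constituent]],
            fit: Callable[[List[str], List[Constituent]], Constituent]) -> Constituent:
    """Try the core grammar first; fall back to fitting when no S spans the input."""
    chart = core_grammar_parse(words)
    sentences = [c for c in chart
                 if c.label == "S" and c.start == 0 and c.end == len(words)]
    if sentences:
        # Multiple parses: let the ranking metric decide (best score; direction assumed here).
        return max(sentences, key=lambda c: c.metric)
    # Parse failure: assemble a reasonable approximate structure from the partial records.
    return fit(words, chart)
```

The fit fallback invoked here is the two-stage procedure described next.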
At this point, as a by-product of bottom-up parsing, records are available for inspection that describe the various segments of the input string from many perspectives, according to the rules that have been applied. The term fitting has to do with selecting and fitting these pieces of the analysis together in a reasonable fashion. The algorithm proceeds in two main stages: first, a head constituent is chosen; next, remaining constituents are fitted in. In our current implementation, candidates for the head are tested preferentially as follows, from most to least desirable: (a) VPs with tense and subject; (b) VPs with tense but no subject; (c) segments other than VP; (d) untensed VPs. If more than one candidate is found in any category, the one preferred is the widest (covering most text). If there is a tie for widest, the leftmost of those is preferred. If there is a tie for leftmost, the one with the best value for the parse metric is chosen. If there is still a tie (a very unlikely case), an arbitrary choice is made. (Note that we consider a VP to be any segment of text that has a verb as its head element.) The fitting process is complete if the head constituent covers the entire input string (as would be the case if the string contained just a noun phrase, for example, "Salutations and congratulations"). If the head constituent does not cover the entire string, remaining constituents are added on either side, with the following order of preference: (a) segments other than VP; (b) untensed VPs; (c) tensed VPs. As with the choice of head, the widest candidate is preferred at each step. The fit moves outward from the head, both leftward to the beginning of the string, and rightward to the end, until the entire input string has been fitted into a best approximate parse tree. The overall effect of the fitting process is to select the largest chunk of sentence-like material within a text string and consider it to be central, with left-over chunks of text attached in some reasonable manner. As a simple example, consider this text string which appeared in one of our EPISTLE data base letters: "Example: 75 percent of $250.00 is $187.50." Because this string has a capitalized first word and a period at its end, it is submitted to the core grammar for consideration as a sentence. But it is not a sentence, and so the CG will fail to arrive at a completed parse. However, during processing, the CG will have assigned many structures to its many substrings. Looking for a head constituent among these structures, the fitting procedure will first seek VPs with tense and subject. Several are present: "$250.00 is", "percent of $250.00 is", "$250.00 is $187.50", and so on. The widest and leftmost of these VP constituents is the one which covers the string "75 percent of $250.00 is $187.50", so it will be chosen as head. The fitting process then looks for additional constituents to the left, favoring ones other than VP. It finds first the colon, and then the word "Example". In this string the only constituent following the head is the final period, which is duly added. The complete fitted parse is shown in Figure 1. The form of parse tree used here shows the top-down structure of the string from left to right, with the terminal nodes being the last item on each line. At each level of the tree (in a vertical column), the head element of a constituent is marked with an asterisk. The other elements above and below are pre- and post-modifiers.
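The preference ordering just described translates fairly directly into code. The sketch below is an illustrative Python reconstruction of the two fitting stages, not the NLP implementation itself; the Constituent record, the has_tense/has_subject flags, the sign of the metric comparison, and the assumption that the chart always contains at least a one-word constituent ending or starting at every position are all simplifications introduced here.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Constituent:
    label: str              # "VP", "NP", "PP", "PUNC", ...
    start: int              # first word covered
    end: int                # one past the last word covered
    metric: float = 0.0     # parse-metric score (used only as a tie-breaker)
    has_tense: bool = False
    has_subject: bool = False

def head_class(c: Constituent) -> int:
    """Preference classes for the head constituent (0 is most desirable)."""
    if c.label == "VP" and c.has_tense and c.has_subject:
        return 0
    if c.label == "VP" and c.has_tense:
        return 1
    if c.label != "VP":
        return 2
    return 3                # untensed VP

def choose_head(chart: List[Constituent]) -> Constituent:
    # Best class first, then widest, then leftmost, then best metric (direction assumed).
    return min(chart, key=lambda c: (head_class(c), -(c.end - c.start), c.start, -c.metric))

def modifier_class(c: Constituent) -> int:
    """Preference classes for constituents fitted in around the head."""
    if c.label != "VP":
        return 0
    if not c.has_tense:
        return 1
    return 2                # tensed VPs are least preferred

def fit(words: List[str], chart: List[Constituent]) -> List[Constituent]:
    """Return the head plus left and right modifiers covering the whole input string."""
    head = choose_head(chart)
    fitted = [head]
    pos = head.start
    while pos > 0:           # work leftward from the head to the start of the string
        best = min((c for c in chart if c.end == pos and c.start < c.end),
                   key=lambda c: (modifier_class(c), -(c.end - c.start)))
        fitted.insert(0, best)
        pos = best.start
    pos = head.end
    while pos < len(words):  # work rightward from the head to the end of the string
        best = min((c for c in chart if c.start == pos and c.start < c.end),
                   key=lambda c: (modifier_class(c), -(c.end - c.start)))
        fitted.append(best)
        pos = best.end
    return fitted
```

On the "Example: 75 percent of $250.00 is $187.50." string above, this ordering picks the widest tensed VP as head and then attaches "Example", the colon, and the final period around it, as in the fitted parse just described.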
The highest element of the trees shown here is FITTED, rather than the more usual SENT. (It is important to remember that these parse diagrams are only shorthand representations for the NLP record structures, which contain an abundance of information about the string processed.) The tree of Figure 1, which would be lost if we restricted ourselves to the precise rules of the core grammar, is now available for examination, for grammar and style checking, and ultimately for semantic interpretation. It can take its place in the stream of continuous text and be analyzed for what it is: a sentence fragment, interpretable only by reference to other sentences in context. The fitted parse approach can help to deal with many difficult natural language problems, including fragments, difficult cases of ellipsis, proliferation of rules to handle single phenomena, phenomena for which no rule seems adequate, and punctuation horrors. Each of these is discussed here with examples. Fragments. There are many of these in running text; they are frequently NPs, as in Figure 2, and include common greetings, farewells, and sentiments. (N.b., all examples in this paper are taken from the EPISTLE data base.) Difficult cases of ellipsis. In the sentence of Figure 3, what we really have at a semantic level is a conjunction of two propositions which, if generated directly, would read: " (a) the proper analysis of this sentence would be obscured: some pieces -- namely, the inferred concepts -- are missing from the second part of the surface sentence; (b) the linguistic generalization would be lost: any two conjoined propositions can undergo deletion of identical (recoverable) elements. A fitted parse such as Figure 3 allows us to inspect the main clause for syntactic and stylistic deviances, and at the same time makes clear the breaking point between the two propositions and opens the door for a later semantic processing of the elided elements. Proliferation of rules to handle single phenomena. There are some English constructions which, although they have a fairly simple and unitary form, do not hold anything like a unitary ordering relation within clause boundaries. The vocative is one of these: (a) Bill, I've been asked to clarify the enclosed letter. Rules could be written that would explicitly allow the placement of a proper name, surrounded by commas, at different positions in the sentence -- a different rule for each position. But this solution lacks elegance, makes a simple phenomenon seem complicated, and always runs the risk of overlooking yet one more position where some other writer might insert a vocative. The parse fitting procedure provides an alternative that preserves the integrity of the main clause and adds the vocative at a break in the structure, which is where it belongs, as shown in Figure 4. Other similar phenomena, such as parenthetical expressions, can be handled in this same fashion. Phenomena for which no rule seems adequate. The sentence "Good luck to you and yours and I wish you the very best in your future efforts." is, on the face of it, a conjunction of a noun phrase (or NP plus PP) with a finite verb phrase. Such constructions are not usually considered to be fully grammatical, and a core grammar which contained a rule describing this construction ought probably to be called a faulty grammar. Nevertheless, ordinary English correspondence abounds with strings of this sort, and readers have no difficulty construing them.
The fitted parse for this sentence in Figure 5 presents the finite clause as its head and adds the remaining constituents in a reasonable fashion. From this structure later semantic processing could infer that "Good luck to you and yours" really means "I express/send/wish good luck to you and yours" -- a special case of formalized, ritualized ellipsis. Punctuation horrors. In any large sample of natural language text, there will be many irregularities of punctuation which, although perfectly understandable to readers, can completely disable an explicit computational grammar. In business text these difficulties are frequent. Some can be caught and corrected by punctuation checkers and balancers. But others cannot, sometimes because, for all their trickiness, they are not really wrong. Yet few grammarians would care to dignify, by describing it with rules of the core grammar, a text string like: "Options: A1-(Transmitter Clocked by Dataset) B3-(without the 605 Recall Unit) C5-(with ABC Ring Indicator) D8-(without Auto Answer) E10-(Auto Ring Selective)." Our parse fitting procedure handles this example by building a string of NPs separated with punctuation marks, as shown in Figure 6. This solution at least enables us to get a handle on the contents of the string. There are two main benefits to be gained from using the fitted parse approach. First, it allows for syntactic processing -- for our purposes, grammar and style checking -- to proceed in the absence of a perfect parse. Second, it provides a promising structure to submit to later semantic processing routines. And parenthetically, a fitted parse diagram is a great aid to rule debugging. The place where the first break occurs between the head constituent and its pre- or post-modifiers usually indicates fairly precisely where the core grammar failed. It should be emphasized that a fitting procedure cannot be used as a substitute for explicit rules, and that it in no way lessens the importance of the core grammar. There is a tight interaction between the two components. The success of the fitted parse depends on the accuracy and completeness of the core rules; a fit is only as good as its grammar. In December of 1981, the EPISTLE grammar, which at that time consisted of about 250 grammar rules and did not include the fitted parsing technique, was run on the data base of 2254 sentences from business letters of various types. The input corpus was very raw: it had not been edited for spelling or other typing errors, nor had it been manipulated in any way that might have made parsing easier. At that time the system failed to parse 832, or 36%, of the input sentences. (It gave single parses for 41%, double parses for 11%, and 3 or more parses for 12%.) Then we added the fitting procedure and also worked to improve the core grammar. Concentrating only on those 832 sentences which in December failed to parse, we ran the grammar again in July, 1982, on a subset of 163 of them. This time the number of core grammar rules was 300. Where originally the CG could parse none of these 163 sentences, this time it yielded parses (mostly single or double) for 109 of them.
The remaining 54 were handled by the fitting procedure. Close analysis of the 54 fitted parses revealed that 14 of these sentences bypass the core grammar simply because of missing dictionary information: for example, the CG contains a rule to parse ditransitive VPs (indirect object-taking VPs with verbs like "give" or "send"), but that rule will not apply if the verb is not marked as ditransitive. The EPISTLE dictionary will eventually have all ditransitive verbs marked properly, but right now it does not. Removing those 14 sentences from consideration, we are left with a residue of 40 strings, or about 25% of the 163 sentences, which we expect always to handle by means of the fitted parse. These strings include all of the problem types mentioned above (fragments, ellipsis, etc.), and the fitted parses produced were adequate for our purposes. It is not yet clear how this 25% might extrapolate to business text at large, but it seems safe to say that there will always be a significant percentage of natural business correspondence which we cannot expect to parse with the core grammar, but which responds nicely to peripheral processing techniques like those of the fitted parse. (A more recent run of the entire data base resulted in 27% fitted parses.) Although we know of no approach quite like the one described here, other related work has been done. Most of this work suggests that unparsable or ill-formed input should be handled by relaxation techniques, i.e., by relaxing restrictions in the grammar rules in some principled way. This is undoubtedly a useful strategy -- one which EPISTLE makes use of, in fact, in its rules for detecting grammatical errors (Heidorn et al. 1982). However, it is questionable whether such a strategy can ultimately succeed in the face of the overwhelming (for all practical purposes, infinite) variety of ill-formedness with which we are faced when we set out to parse truly unrestricted natural language input. If all ill-formedness is rule-based (Weischedel and Sondheimer 1981, p. 3), it can only be by some very loose definition of the term rule, such as that which might apply to the fitting algorithm described here. Thus Weischedel and Black, 1980, suggest three techniques for responding intelligently to unparsable inputs: (a) using presuppositions to determine user assumptions; this course is not available to a syntactic grammar like EPISTLE's; (b) using relaxation techniques; (c) supplying the user with information about the point where the parse blocked; this would require an interactive environment, which would not be possible for every type of natural language processing application. Kwasny and Sondheimer, 1981, are strong proponents of relaxation techniques, which they use to handle both cases of clearly ungrammatical structures, such as co-occurrence violations like subject/verb disagreement, and cases of perfectly acceptable but difficult constructions (ellipsis and conjunction). Weischedel and Sondheimer, 1982, describe an improved ellipsis processor. No longer is ellipsis handled with relaxation techniques, but by predicting transformations of previous parsing paths which would allow for the matching of fragments with plausible contexts. This plan would be appropriate as a next step after the fitted parse, but it does not guarantee a parse for all elided inputs. Hayes and Mouradian, 1981, also use the relaxation method.
They achieve flexibility in their parser by relaxing consistency constraints (grammatical restrictions, like Kwasny and Sondheimer's co-occurrence violations) and also by relaxing ordering constraints. However, they are working with a restricted-domain semantic system and their approach, as they admit, "does not embody a solution for flexible parsing of natural language in general" (p. 236). The work of Wilks is heavily semantic and therefore quite different from EPISTLE, but his general philosophy meshes nicely with the philosophy of the fitted parse: "It is proper to prefer the normal...but it would be absurd...not to accept the abnormal if it is described" (Wilks 1975, p. 267). Wilks' approach to machine translation, which involves doing some amount of the translation on a phrase-by-phrase basis, is relevant here too. With fitted parsing, it might be possible to get usable translations for strings that cannot be completely parsed with the core grammar by translating each phrase of the fitted parse separately. | null | null | Main paper:
introduction:
The EPISTLE project has as its long-range goal the machine processing of natural language text in an office environment. Ultimately we intend to have software that will be able to parse and understand ordinary prose documents (such as those that an office principal might expect his secretary to cope with), and will be able to generate at least a first draft of a business letter or memo. Our current goal is a system for critiquing written material on points of grammar and style. Our grammar is written in NLP (Heidorn 1972), an augmented phrase structure language which is implemented in LISP/370. The EPISTLE grammar currently uses syntactic, but not semantic, information. Access to an on-line standard dictionary with about 130,000 entries, including part-of-speech and some other syntactic information (such as transitivity of verbs), makes the system's vocabulary essentially unlimited. We test and improve the grammar by regularly running it on a data base of 2254 sentences from 411 actual business letters. Most of these sentences are rather complicated; the longest contains 63 words, and the average length is 19.2 words. Since the subset of English which is represented in business documents is very large, we need a very comprehensive grammar and robust parser. In the course of this work we have developed some new techniques to help deal with the refractory nature of natural language syntax. In this paper we discuss one such technique: the fitted parse, which guarantees the production of a reasonable parse tree for any string, no matter how unorthodox that string may be. The parse which is produced by fitting might not be perfect; but it will always be reasonable and useful, and will allow for later refinement by semantic processing. There is a certain perception of parsing that leads to the development of techniques like this one: namely, that trying to write a grammar to describe explicitly all and only the sentences of a natural language is about as practical as trying to find the Holy Grail. Not only will the effort expended be Herculean, it will be doomed to failure. Instead we take a heuristic approach and consider that a natural language parser can be divided into three parts: (a) a set of rules, called the core grammar, that precisely define the central, agreed-upon grammatical structures of a language; (b) peripheral procedures that handle parsing ambiguity: when the core grammar produces more than one parse, these procedures decide which of the multiple parses is to be preferred; (c) peripheral procedures that handle parsing failure: when the core grammar cannot define an acceptable parse, these procedures assign some reasonable structure to the input. In EPISTLE, (a) the core grammar consists at present of a set of about 300 syntax rules; (b) ambiguity is resolved by using a metric that ranks alternative parses (Heidorn 1982); and (c) parse failure is handled by the fitting procedure described here. In using the terms core grammar and periphery we are consciously echoing recent work in generative grammar, but we are applying the terms in a somewhat different way.
Core grammar, in current linguistic theory, suggests the notion of a set of very general rules which define universal properties of human language and effectively set limits on the types of grammars that any particular language may have; periphery phenomena are those constructions which are peculiar to particular languages and which require added rules beyond what the core grammar will provide (Lasnik and Freidin 1981). Our current work is not concerned with the meta-rules of a Universal Grammar. But we have found that a distinction between core and periphery is useful even within a grammar of a particular language -- in this case, English. This paper first reviews parsing in EPISTLE, and then describes the fitting procedure, followed by several examples of its application. Then the benefits of parse fitting and the results of using it in our system are discussed, followed by its relation to other work. EPISTLE's parser is written in the NLP programming language, which works with augmented phrase structure rules and with attribute-value records, which are manipulated by the rules. When NLP is used to parse natural language text, the records describe constituents, and the rules put these constituents together to form ever larger constituent (or record) structures. Records contain all the computational and linguistic information associated with words, with larger constituents, and with the parse formation. At this time our grammar is sentence-based; we do not, for instance, create record structures to describe paragraphs. Details of the EPISTLE system and of its core grammar may be found in Miller et al., 1981, and Heidorn et al., 1982. A close examination of parse trees produced by the core grammar will often reveal branch attachments that are not quite right: for example, semantically incongruous prepositional phrase attachments. In line with our pragmatic parsing philosophy, our core grammar is designed to produce unique approximate parses. (Recall that we currently have access only to syntactic and morphological information about constituents.) In the cases where semantic or pragmatic information is needed before a proper attachment can be made, rather than produce a confusion of multiple parses we force the grammar to try to assign a single parse. This is usually done by forcing some attachments to be made to the closest, or rightmost, available constituent. This strategy only rarely impedes the type of grammar-checking and style-checking that we are working on. And we feel that a single parse with a consistent attachment scheme will yield much more easily to later semantic processing than would a large number of different structures. The rules of the core grammar (CG) produce single approximate parses for the largest percentage of input text. The CG can always be improved and its coverage extended; work on improving the EPISTLE CG is continual. But the coverage of a core grammar will never reach 100%. Natural language is an organic symbol system; it does not submit to cast-iron control. For those strings that cannot be fully parsed by rules of the core grammar we use a heuristic best fit procedure that produces a reasonable parse structure. The fitting procedure begins after the CG rules have been applied in a bottom-up, parallel fashion, but have failed to produce an S node that covers the string.
At this point, as a by-product of bottom-up parsing, records are available for inspection that describe the various segments of the input string from many perspectives, according to the rules that have been applied. The term fitting has to do with selecting and fitting these pieces of the analysis together in a reasonable fashion. The algorithm proceeds in two main stages: first, a head constituent is chosen; next, remaining constituents are fitted in. In our current implementation, candidates for the head are tested preferentially as follows, from most to least desirable: (a) VPs with tense and subject; (b) VPs with tense but no subject; (c) segments other than VP; (d) untensed VPs. If more than one candidate is found in any category, the one preferred is the widest (covering most text). If there is a tie for widest, the leftmost of those is preferred. If there is a tie for leftmost, the one with the best value for the parse metric is chosen. If there is still a tie (a very unlikely case), an arbitrary choice is made. (Note that we consider a VP to be any segment of text that has a verb as its head element.) The fitting process is complete if the head constituent covers the entire input string (as would be the case if the string contained just a noun phrase, for example, "Salutations and congratulations"). If the head constituent does not cover the entire string, remaining constituents are added on either side, with the following order of preference: (a) segments other than VP; (b) untensed VPs; (c) tensed VPs. As with the choice of head, the widest candidate is preferred at each step. The fit moves outward from the head, both leftward to the beginning of the string, and rightward to the end, until the entire input string has been fitted into a best approximate parse tree. The overall effect of the fitting process is to select the largest chunk of sentence-like material within a text string and consider it to be central, with left-over chunks of text attached in some reasonable manner. As a simple example, consider this text string which appeared in one of our EPISTLE data base letters: "Example: 75 percent of $250.00 is $187.50." Because this string has a capitalized first word and a period at its end, it is submitted to the core grammar for consideration as a sentence. But it is not a sentence, and so the CG will fail to arrive at a completed parse. However, during processing, the CG will have assigned many structures to its many substrings. Looking for a head constituent among these structures, the fitting procedure will first seek VPs with tense and subject. Several are present: "$250.00 is", "percent of $250.00 is", "$250.00 is $187.50", and so on. The widest and leftmost of these VP constituents is the one which covers the string "75 percent of $250.00 is $187.50", so it will be chosen as head. The fitting process then looks for additional constituents to the left, favoring ones other than VP. It finds first the colon, and then the word "Example". In this string the only constituent following the head is the final period, which is duly added. The complete fitted parse is shown in Figure 1. The form of parse tree used here shows the top-down structure of the string from left to right, with the terminal nodes being the last item on each line. At each level of the tree (in a vertical column), the head element of a constituent is marked with an asterisk. The other elements above and below are pre- and post-modifiers.
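As a small, self-contained illustration of the head-selection ordering on the example string above, the Python snippet below ranks a few of the candidate spans by the stated criteria (preference class, then width, then leftmost position). The token indices and the candidate list are made up for illustration; they are not output of the actual system.

```python
# Candidate head constituents for "Example: 75 percent of $250.00 is $187.50."
# Tokens (illustrative indexing): Example(0) :(1) 75(2) percent(3) of(4)
# $250.00(5) is(6) $187.50(7) .(8)
# Each entry: (text, preference_class, start, end); class 0 = tensed VP with subject.
candidates = [
    ("$250.00 is",                       0, 5, 7),
    ("percent of $250.00 is",            0, 3, 7),
    ("75 percent of $250.00 is $187.50", 0, 2, 8),
    ("Example",                          2, 0, 1),   # a non-VP segment
]

def preference_key(cand):
    _, pref_class, start, end = cand
    # Best class first, then widest span, then leftmost start.
    return (pref_class, -(end - start), start)

head = min(candidates, key=preference_key)
print("chosen head:", head[0])
# prints: chosen head: 75 percent of $250.00 is $187.50
```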
The highest element of the trees shown here is FITTED, rather than the more usual SENT. (It is important to remember that these parse diagrams are only shorthand representations for the NLP record structures, which contain an abundance of information about the string processed.) The tree of Figure 1, which would be lost if we restricted ourselves to the precise rules of the core grammar, is now available for examination, for grammar and style checking, and ultimately for semantic interpretation. It can take its place in the stream of continuous text and be analyzed for what it is: a sentence fragment, interpretable only by reference to other sentences in context. The fitted parse approach can help to deal with many difficult natural language problems, including fragments, difficult cases of ellipsis, proliferation of rules to handle single phenomena, phenomena for which no rule seems adequate, and punctuation horrors. Each of these is discussed here with examples. Fragments. There are many of these in running text; they are frequently NPs, as in Figure 2, and include common greetings, farewells, and sentiments. (N.b., all examples in this paper are taken from the EPISTLE data base.) Difficult cases of ellipsis. In the sentence of Figure 3, what we really have at a semantic level is a conjunction of two propositions which, if generated directly, would read: " (a) the proper analysis of this sentence would be obscured: some pieces -- namely, the inferred concepts -- are missing from the second part of the surface sentence; (b) the linguistic generalization would be lost: any two conjoined propositions can undergo deletion of identical (recoverable) elements. A fitted parse such as Figure 3 allows us to inspect the main clause for syntactic and stylistic deviances, and at the same time makes clear the breaking point between the two propositions and opens the door for a later semantic processing of the elided elements. Proliferation of rules to handle single phenomena. There are some English constructions which, although they have a fairly simple and unitary form, do not hold anything like a unitary ordering relation within clause boundaries. The vocative is one of these: (a) Bill, I've been asked to clarify the enclosed letter. Rules could be written that would explicitly allow the placement of a proper name, surrounded by commas, at different positions in the sentence -- a different rule for each position. But this solution lacks elegance, makes a simple phenomenon seem complicated, and always runs the risk of overlooking yet one more position where some other writer might insert a vocative. The parse fitting procedure provides an alternative that preserves the integrity of the main clause and adds the vocative at a break in the structure, which is where it belongs, as shown in Figure 4. Other similar phenomena, such as parenthetical expressions, can be handled in this same fashion. Phenomena for which no rule seems adequate. The sentence "Good luck to you and yours and I wish you the very best in your future efforts." is, on the face of it, a conjunction of a noun phrase (or NP plus PP) with a finite verb phrase. Such constructions are not usually considered to be fully grammatical, and a core grammar which contained a rule describing this construction ought probably to be called a faulty grammar. Nevertheless, ordinary English correspondence abounds with strings of this sort, and readers have no difficulty construing them.
The fitted parse for this sentence in Figure 5 presents the finite clause as its head and adds the remaining constituents in a reasonable fashion. From this structure later semantic processing could infer that "Good luck to you and yours" really means "I express/send/wish good luck to you and yours" -- a special case of formalized, ritualized ellipsis. Punctuation horrors. In any large sample of natural language text, there will be many irregularities of punctuation which, although perfectly understandable to readers, can completely disable an explicit computational grammar. In business text these difficulties are frequent. Some can be caught and corrected by punctuation checkers and balancers. But others cannot, sometimes because, for all their trickiness, they are not really wrong. Yet few grammarians would care to dignify, by describing it with rules of the core grammar, a text string like: "Options: A1-(Transmitter Clocked by Dataset) B3-(without the 605 Recall Unit) C5-(with ABC Ring Indicator) D8-(without Auto Answer) E10-(Auto Ring Selective)." Our parse fitting procedure handles this example by building a string of NPs separated with punctuation marks, as shown in Figure 6. This solution at least enables us to get a handle on the contents of the string. There are two main benefits to be gained from using the fitted parse approach. First, it allows for syntactic processing -- for our purposes, grammar and style checking -- to proceed in the absence of a perfect parse. Second, it provides a promising structure to submit to later semantic processing routines. And parenthetically, a fitted parse diagram is a great aid to rule debugging. The place where the first break occurs between the head constituent and its pre- or post-modifiers usually indicates fairly precisely where the core grammar failed. It should be emphasized that a fitting procedure cannot be used as a substitute for explicit rules, and that it in no way lessens the importance of the core grammar. There is a tight interaction between the two components. The success of the fitted parse depends on the accuracy and completeness of the core rules; a fit is only as good as its grammar. In December of 1981, the EPISTLE grammar, which at that time consisted of about 250 grammar rules and did not include the fitted parsing technique, was run on the data base of 2254 sentences from business letters of various types. The input corpus was very raw: it had not been edited for spelling or other typing errors, nor had it been manipulated in any way that might have made parsing easier. At that time the system failed to parse 832, or 36%, of the input sentences. (It gave single parses for 41%, double parses for 11%, and 3 or more parses for 12%.) Then we added the fitting procedure and also worked to improve the core grammar. Concentrating only on those 832 sentences which in December failed to parse, we ran the grammar again in July, 1982, on a subset of 163 of them. This time the number of core grammar rules was 300. Where originally the CG could parse none of these 163 sentences, this time it yielded parses (mostly single or double) for 109 of them.
The remaining 54 were handled by the fitting procedure. Close analysis of the 54 fitted parses revealed that 14 of these sentences bypass the core grammar simply because of missing dictionary information: for example, the CG contains a rule to parse ditransitive VPs (indirect object-taking VPs with verbs like "give" or "send"), but that rule will not apply if the verb is not marked as ditransitive. The EPISTLE dictionary will eventually have all ditransitive verbs marked properly, but right now it does not. Removing those 14 sentences from consideration, we are left with a residue of 40 strings, or about 25% of the 163 sentences, which we expect always to handle by means of the fitted parse. These strings include all of the problem types mentioned above (fragments, ellipsis, etc.), and the fitted parses produced were adequate for our purposes. It is not yet clear how this 25% might extrapolate to business text at large, but it seems safe to say that there will always be a significant percentage of natural business correspondence which we cannot expect to parse with the core grammar, but which responds nicely to peripheral processing techniques like those of the fitted parse. (A more recent run of the entire data base resulted in 27% fitted parses.) Although we know of no approach quite like the one described here, other related work has been done. Most of this work suggests that unparsable or ill-formed input should be handled by relaxation techniques, i.e., by relaxing restrictions in the grammar rules in some principled way. This is undoubtedly a useful strategy -- one which EPISTLE makes use of, in fact, in its rules for detecting grammatical errors (Heidorn et al. 1982). However, it is questionable whether such a strategy can ultimately succeed in the face of the overwhelming (for all practical purposes, infinite) variety of ill-formedness with which we are faced when we set out to parse truly unrestricted natural language input. If all ill-formedness is rule-based (Weischedel and Sondheimer 1981, p. 3), it can only be by some very loose definition of the term rule, such as that which might apply to the fitting algorithm described here. Thus Weischedel and Black, 1980, suggest three techniques for responding intelligently to unparsable inputs: (a) using presuppositions to determine user assumptions; this course is not available to a syntactic grammar like EPISTLE's; (b) using relaxation techniques; (c) supplying the user with information about the point where the parse blocked; this would require an interactive environment, which would not be possible for every type of natural language processing application. Kwasny and Sondheimer, 1981, are strong proponents of relaxation techniques, which they use to handle both cases of clearly ungrammatical structures, such as co-occurrence violations like subject/verb disagreement, and cases of perfectly acceptable but difficult constructions (ellipsis and conjunction). Weischedel and Sondheimer, 1982, describe an improved ellipsis processor. No longer is ellipsis handled with relaxation techniques, but by predicting transformations of previous parsing paths which would allow for the matching of fragments with plausible contexts. This plan would be appropriate as a next step after the fitted parse, but it does not guarantee a parse for all elided inputs. Hayes and Mouradian, 1981, also use the relaxation method.
They achieve flexibility in their parser by relaxing consistency constraints (grammatical restrictions, like Kwasny and Sondheimer's co-occurrence violations) and also by relaxing ordering constraints. However, they are working with a restricted-domain semantic system and their approach, as they admit, "does not embody a solution for flexible parsing of natural language in general" (p. 236). The work of Wilks is heavily semantic and therefore quite different from EPISTLE, but his general philosophy meshes nicely with the philosophy of the fitted parse: "It is proper to prefer the normal...but it would be absurd...not to accept the abnormal if it is described" (Wilks 1975, p. 267). Wilks' approach to machine translation, which involves doing some amount of the translation on a phrase-by-phrase basis, is relevant here too. With fitted parsing, it might be possible to get usable translations for strings that cannot be completely parsed with the core grammar by translating each phrase of the fitted parse separately.
Appendix:
| null | null | null | null | {
"paperhash": [
"heidorn|the_epistle_text-critiquing_system",
"weischedel|an_improved_heuristic_for_ellipsis_processing",
"heidorn|experience_with_an_easily_computed_metric_for_ranking_alternative_parses",
"miller|text-critiquing_with_the_epistle_system:_an_author's_aid_to_better_syntax",
"kwasny|relaxation_techniques_for_parsing_grammatically_ill-formed_input_in_natural_language_understanding_systems",
"hayes|flexible_parsing",
"weischedel|responding_intelligently_to_unparsable_inputs",
"wilks|an_intelligent_analyzer_and_understander_of_english",
"heidorn|natural_language_inputs_to_a_simulation_programming_system:_an_introduction"
],
"title": [
"The EPISTLE Text-Critiquing System",
"An Improved Heuristic for Ellipsis Processing",
"Experience with an Easily Computed Metric for Ranking Alternative Parses",
"Text-critiquing with the EPISTLE system: an author's aid to better syntax",
"Relaxation Techniques for Parsing Grammatically Ill-Formed Input in Natural Language Understanding Systems",
"Flexible Parsing",
"Responding Intelligently to Unparsable Inputs",
"An intelligent analyzer and understander of English",
"Natural language inputs to a simulation programming system: An introduction"
],
"abstract": [
"The experimental EPISTLE system is intended to provide \"intelligent\" functions for processing business correspondence and other texts in an office environment. This paper focuses on the initial objectives of the system: critiquing written material on points of grammar and style. The overall system is described, with some details of the implementation, user interface, and the three levels of processing, especially the syntactic parsing of sentences with a computerized English grammar.",
"Several natural language systems (e.g., Bobrow et al., 1977; Hendrix et al., 1978; Kwasny and Sondheimer, 1979) include heuristics for replacement and repetition ellipsis, but not expansion ellipsis. One general strategy has been to substitute fragments into the analysis of the previous input, e.g., substituting parse trees of the elliptical input into the parse trees of the previous input in LIFER (Hendrix, et al., 1978). This only applies to inputs of the same type, e.g., repeated questions.",
"This brief paper, which is itself an extended abstract for a forthcoming paper, describes a metric that can be easily computed during either bottom-up or top-down construction of a parse tree for ranking the desirability of alternative parses. In its simplest form, the metric tends to prefer trees in which constituents are pushed as far down as possible, but by appropriate modification of a constant in the formula other behavior can be obtained also. This paper includes in introduction to the EPISTLE system being developed at IBM Research and a discussion of the results of using this metric with that system.",
"The experimental EPISTLE system is ultimately intended to provide office workers with intelligent applications for the processing of natural language text, particularly business correspondence. A variety of possible critiques of textual material are identified in this paper, but the discussion focuses on the system's capability to detect several classes of grammatical errors, such as disagreement in number between the subject and the verb. The system's error-detection performance relies critically on its parsing component which determines the syntactic structure of each sentence and the grammatical functions fulfilled by various phrases. Details of the system's operations are provided, and some of the future critiquing objectives are outlined.",
"This paper investigates several language phenomena either considered deviant by linguistic standards or insufficiently addressed by existing approaches. These include co-occurrence violations, some forms of ellipsis and extraneous forms, and conjunction. Relaxation techniques for their treatment in Natural Language Understanding Systems are discussed. These techniques, developed within the Augmented Transition Network (ATN) model, are shown to be adequate to handle many of these cases.",
"When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking-off and restarting, speaking in fragments, etc., Their human listeners are usually able to cope with these deviations with little difficulty. If a computer system wishes to accept natural language input from its users on a routine basis, it must display a similar indifference. In this paper, we outline a set of parsing flexibilities that such a system should provide. We go on to describe FlexP. a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system.",
"All natural language systems are likely to receive inputs for which they are unprepared. The system must be able to respond to such inputs by explicitly indicating the reasons the input could not be understood, so that the user will have precise information for trying to rephrase the input. If natural language communication to data bases, to expert consultant systems, or to any other practical system is to be accepted by other than computer personnel, this is an absolute necessity.This paper presents several ideas for dealing with parts of this broad problem. One is the use of presupposition to detect user assumptions. The second is relaxation of tests while parsing. The third is a general technique for responding intelligently when no parse can be found. All of these ideas have been implemented and tested in one of two natural language systems. Some of the ideas are heuristics that might be employed by humans; others are engineering solutions for the problem of practical natural language systems.",
"The paper describes a working analysis and generation program for natural language, which handles paragraph length input. Its core is a system of preferential choice between deep semantic patterns, based on what we call “semantic density.” The system is contrasted: with syntax oriented linguistic approaches, and with theorem proving approaches to the understanding problem.",
"A simulation programming system with which models for simple queuing problems can be built through naturallanguage interaction with a computer is described. In this system the English statement of a problem is first translated into a language -independent entity-attribute-value information structure, which can then be translated back into an equivalent English description and into a GPSS simulation program for the problem. This processing is done on an IBM 360/67 by a FORTRAN program which is guided by a set of stratified decoding and encoding rules written in a grammar-rule language developed for this system. A detailed example of the use of the system is included. This task was supported by the Information Systems Program of the Office of Naval Research as Project NR 049314, under Project Order PO 1-0177. The facilities of the W.R. Church Computer Center were utilized for this research."
],
"authors": [
{
"name": [
"George E. Heidorn",
"Karen Jensen",
"L. A. Miller",
"Roy J. Byrd",
"M. Chodorow"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Weischedel",
"N. Sondheimer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"George E. Heidorn"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"L. A. Miller",
"George E. Heidorn",
"Karen Jensen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Kwasny",
"N. Sondheimer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Hayes",
"G. Mouradian"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Weischedel",
"J. Black"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Y. Wilks"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"George E. Heidorn"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"12799954",
"1727772",
"13343095",
"17922808",
"181820",
"11007680",
"18828496",
"5968738",
"60214583"
],
"intents": [
[],
[],
[
"methodology",
"background"
],
[],
[],
[
"methodology"
],
[
"background"
],
[],
[
"methodology"
]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
true
]
} | - Problem: The paper addresses the issue of parsing natural language text that cannot be parsed by conventional syntactic grammar rules.
- Solution: The paper proposes a technique called fitted parsing, which produces a reasonable approximate parse for unorthodox input strings, allowing for further processing stages in natural language analysis. | 504 | 0.06746 | null | null | null | null | null | null | null | null |
0b5d092192239d5ec9c42e6d06d317cfa63f5834 | 9036434 | null | Automatic Representation of the Semantic Relationships Corresponding to a {F}rench Surface Expression | The work presented here is a preliminary study concerning the automatic translation of French natural language statements into the RESEDA semantic metalanguage. The text in natural language is first (pre)processed in order to obtain its syntactic structure. The "semantic parsing" process begins with marking the "triggers", defined as lexical units which call one or more of the predicative patterns allowed for in the metalanguage. The patterns obtained are then merged, and their case slots filled with the elements found in the surface structure according to the predictions associated with the slots. | {
"name": [
"Zarri, Gian Piero"
],
"affiliation": [
null
]
} | null | null | First Conference on Applied Natural Language Processing | 1983-02-01 | 12 | 24 | null | The work that I intend to presen~ here is a preliminary study concerning the automatic translation of French natural language statements into the RESEDA semantic language.The RESEDA project itself is concerned with the creation and practical exploltation of a system for managing a biographical database using Artiflcial Intelligence (AI) techniques. The term "biographical data" must be understood in its widest possible sense : being in fact any event, in the public or private Ills, physical or intellectual, etc., that it is possible to gather about the personages we are interested in.In the present state of the system, this information concerns a well-defined period in time (approximately between 1350 and 1450) and a particular subject area (French history), but we are now working on the adaptation of RESEDA's methodology to the processing of other biographical data, for example medical or legal data. RESEDA differs from "classical" factual database management systems in two ways:-The information is recorded in the base using a particular Data Definition Language (metalanguage) which uses knowledge representation techniques.-A user interrogating the base obtains not only information which has been directly introduced This research is Jointly financsd by the "Agmnce de l'Informatique -A.D.I." (CNRS/ADI contract n ° 507568) and the "Centre National de la Recherche Scientifique -C.N.R.S." (ATP n ° 955045).into it, but also "hidden" information found using inference mechanisms particular to the system : in this respect, the most important character-Istic of the system lies in the Possibility of using inference procedures to question the database about causal relationships which may exist between the different recorded facts, and which are not explicitly declared at the time of data entry (Zarrl, 1979; . For example, the system may try to explain by inference top-level changes in the State administration in terms of changes in Political power.The biographical information which constitutes the systea's database is organized in the form of units called "planes". There are several different types of plane, sea Zarri e_~ al..L. (1977) ; the "predicative planes", the most important, correspond to a "flash" which illustrates a partlculam mament in the "Ills story" of one or more personagas. A predicative plane is made up of one of flve possible "predicates" (BE-AFFECTED-BY, BERAVE, BE-PRESENT, MOVE, PRODUCE) ; one or more "modulators" may be attached to each predicate. The modulator's function is to specify and delimit the semantic role of the predicate. Each predicate is accompanied by "case slots" which introduce their own arguments ~ dating and space location is also given within a predicative plane, as is the bibliographic authority for the statement. Predicative planes can be linked together in a number of ways ; one way is to use expllcit links of "coordination", "alternative", "causalic/'5 "finality", "condition", etc. The data representation we have chosen in the RESEDA project is basically, therefore, a kind of "case grammar", according to the particular meaning attached to the term in an AI context (Bruce, [975~ Charniak, 1981; etc.) . 
For example, the data "André Marchant was named provost of Paris by the King's Council on 22nd September 1413; he lost his post on 23rd October 1414, to the benefit of Tanguy du Châtel, who was granted this office" will be represented in three planes - that of the nomination of André Marchant, his dismissal, and the nomination of Tanguy du Châtel. The coding of information must be made on two distinct levels: an "external" coding, up until now performed manually by the analyst, gives rise to a first type of representation, formalized according to the categories of the RESEDA metalanguage; a second, automatic stage results in the "internal" numeric code. The external "manual" coding of the three events just stated is given in figure 1 with its associated "case slots". Every predicative plane is characterized by a pair of "time references" (date1-date2) which give the duration of the episode in question. In these three planes, the second date slot (date2) is empty because their modulators (begin, end) specify a change of state associated with a punctual event. "André-Marchant" and "Tanguy-du-Châtel" are historical personages known to the system; "provost", "king's-council" and "letters-of-nomination" are terms of RESEDA's lexicon. The classifications associated with the terms of the lexicon provide the major part of the system's socio-historical knowledge of the period. "Paris" is the "location of the object". If the historical sources analyzed gave us the exact causes of these events, we would introduce into the database the corresponding planes and associate them with these three planes by an explicit link of type "CAUSE". This manual procedure for converting information in natural language into one or more planes has at least two major disadvantages which the proposed study intends to deal with: - The manual representation of biographical information in the terms of the metalanguage can only be performed by a specialist. This is done, at the moment, by the researchers themselves who have constructed the prototype system. Such a method is obviously out of the question if the system is to be used routinely by an uninitiated public, especially as RESEDA was conceived as a system supplied continuously with biographical information extracted from many different sources. - In spite of the fact that the syntax of RESEDA's metalanguage imposes strict constraints on the forming of predicative schemata accepted by the system, and that these are then thoroughly checked, we cannot completely exclude the possibility of two coders translating the same information differently. To describe our methodology, I will use the example given in the preceding section. The initial text in natural language is first (pre)processed to obtain its constituent structure. For this purpose, we have used in a first approach the French surface grammar implemented in DEREDEC, a software package developed at the University of Quebec at Montreal by Pierre Plante (1980a; 1980b). This system, comparable to an ATN parser, permits a breakdown of the surface text into its syntactic constituents, and establishes, between these constituents, syntagmatic relationships of the type "topic-comment", "determination" and "coordination".
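To make the shape of a predicative plane more concrete, here is a minimal sketch of how the first of the three planes coded above (the nomination of André Marchant) could be written down as a plain record. It is only an illustration: the field names, value spellings, and date format are assumptions made for readability, not RESEDA's actual external or internal (numeric) coding.

```python
# Hypothetical sketch of one RESEDA "predicative plane" as a Python dict.
# Field names and values are illustrative; the real coding differs.
plane_1 = {
    "modulators": ["begin", "soc"],          # delimit the semantic role of the predicate
    "predicate": "BE-AFFECTED-BY",           # one of the five predicates
    "case_slots": {
        "SUBJECT": "Andre-Marchant",         # personage known to the system
        "OBJECT": "provost",                 # term of RESEDA's lexicon
        "SOURCE": "king's-council",
    },
    "location_of_object": "Paris",
    "date1": "1413-09-22",                   # punctual event, so date2 stays empty
    "date2": None,
    "bibliography": "letters-of-nomination",
}
```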
This preliminary analysis provides a context for subsequent processing, without necessarily removin~ all the ambiguities : in the same vein, see Bog~raev and Sparck Jones (1982) .The specific tools that we intend to develop for ~his project are of two types : a general procedure which can be likened to a sort of semantic parsing, and a system of heuristic rules.The first stage of the general procedure consists of marking the "triggers", defined as lexical units which call for one or more of the predicative patterns allowed for in RESEDA's metalanguage. Thus we do not take into consideration every one of the lexical items met in the surface text, retaining only those directly pertaining to the "translation" to be done. However, we do not limit ourselves to a simple keyword approach, since a number of operations utilizing data provided by the morphosyntactic analysis executed by DEREDEC are necessary before the predicative patterns which will be actually used afterwards can be selected.One of the results of the DEREDEC analysis ~s a kind of lemmatization enabling the reduction of surface forms in the text to a canonical form ; for example, infinitive in the case of verbs. The canonical forms found in the text under examination are compared with a list of potential triggers stored permanently in the system.In the case of the sentence we are analyzing we can construct from this list the following sub-list : verbal forms -"name", "loss", "qrant" ; ter~s pertainang directly to the metalanguage or terms which have a direct correspondence with elements of the metalanguage : "office", synonymous with "post" in RESEDA ("post" is a "generic" term, a "head" of a "sub-tree" in RESEDA's lexicon), and its specification "provost". The results of the pre-analysis executed by DEREDEC enable the elimination of potential patterns associated with the triggers "name" and "grant" which would correspond to surface constructions of type "active", as in the hypothetical example "The Duke of Orl~ans named Andr~ Marchant provost of Paris ...". The patterns which will be actually utilized afterwards are therefore those shown in figure 2. Note that in the case of a trigger "name (active form)" the parsonage who figures as surface object would have found as the "SUBJECT" of "BE-AFFECTED-BY",whlIst the surface subject would have been associated with the slot "sOURCE" of "BE-AFFECTED-BY". the papal court (social body)". Therefore, for example, the pattern in figure 3 is also associated with the trigger "name (passive form)" The patterns in this second set will be elimina=ed at the end of the construction procedure since, as xt is not possible to obtain a surface realization of the concept "<soclal-body>" in the position In reality, the predicative structures selected are not limited to those shown in figure I. They are in fact repeated with predicative patterns of the type "BE-AFFECTED-BY" which have as "SUBJECT" "<social-body>",and as "OBJECT" "<personage>" accompanied by the specification ("$PECIF") of a "<post>". These constructions each correspond to the description : "A personage receives a post in a certain organization (the organization in question, SUBJECT, is "augmented", BE-AFFECTED-BY, by the personage, OBJECT, in relation, SPECIF, to a given post)". 
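The trigger-marking stage described here amounts to lemmatizing the surface forms and looking the canonical forms up in a stored table of potential triggers, each of which points at one or more predicative patterns. A rough sketch of that step follows; the trigger table and the toy lemmatizer are invented placeholders, not the actual RESEDA lexicon or DEREDEC output.

```python
# Hypothetical illustration of the trigger-marking stage.
# TRIGGER_TABLE and lemmatize() are invented stand-ins.
TRIGGER_TABLE = {
    "name": ["name (active form)", "name (passive form)"],
    "loss": ["loss"],
    "grant": ["grant (active form)", "grant (passive form)"],
    "office": ["post"],      # lexicon term with a direct correspondence
    "provost": ["post"],     # specification of the generic term "post"
}

def lemmatize(token: str) -> str:
    """Stand-in for the reduction of surface forms to a canonical form."""
    return token.lower().rstrip("sd")   # crude placeholder only

def mark_triggers(tokens):
    """Return (surface token, triggered patterns) pairs for known triggers."""
    hits = []
    for tok in tokens:
        lemma = lemmatize(tok)
        if lemma in TRIGGER_TABLE:
            hits.append((tok, TRIGGER_TABLE[lemma]))
    return hits

print(mark_triggers("Andre Marchant was named provost of Paris".split()))
```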
A corresponding surface expression would be, for example, the following : "Andr~ Marchant (personage) is named secretary (post) of "SUBJECT", they cannot provide complete predicative structures.The last stage of the general procedure consists of examining the triggers belonging to the same morpho-syntactic environments, as defined by the results of the DEREDEC analysis.If there are several triggers pertaining to the same envlronment, and if the predicative patterns triggered are the same -which means that the predicates and case slots must be the same and that the modulators, dates and space location information must be compatible -then it can be said that the triggers refer to the same situation. As a name (passive form) ~ begin÷(soc+)BE-AFFECTED-BY SUBJ <social-body> OBJ <personage>-surface subject of the trigger SPECZF <post>-surface complement (SOURCE <personage>l<social-body>) datel : obligatory date2 : prohibited blblo : obligatory figure 3 result, the predicative patterns are merged as to obtain the most complete description possible ; the predictions about filling the slots linked with the cases of the resulting patterns together govern to search for fillers in the surface expression.Thus, the first two triggers in figure 2, recognized as relevant to the same environment, are combined in the formula in figure 4, which gives the general framework of plane i in figure I.elements "Andr~ Marchant", "provost", "King's Council" and "22nd September 1413" -standardized according to RESEDA's conventions, see figure Iwill take up the slots "SUBJECT", "OBJECT", "SOURCE" and "datel" directly. The filling-in operations are usually much more complicated, and require the use of complex inference rules.I shall say just a few words here about the heuristic rules designed to solve cases of anaphora (as in our example, "he", "this office", "who").begln+(soc*)BE-AFFECTED-BY SUBJ <personage>-surface subject of "was named" OBJ <post>-"provost" (SOURCE <personage>l<social-body>-surface complement of the agent of "was named") datel : obligatory date2 : prohibited bibl. : obligatory figure 4The example we are considering illustrates a particularly simple case, in which it is not necessary to establish links between the planes to be created.If we had to process the sentence "Philibert de St L~ger is nominated seneschal of Lyon on the 30th of July 1412, in lieu of the late A. de Viry", three planes should be generated : one for the nomination of Philibert de St L~ger, one for the death of A. de Viry, and another establishing a weak causality llnk ("CONFER", in our metalanguage) between the first two planes. Surface items such as conjunctions, prepositions and sentential adverbs can be used to infer links between planes : causality, finality, coordination, etc. 
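The merging step described here - triggers from the same morpho-syntactic environment whose patterns share predicate and case slots, with compatible modulators and dates, are combined into the most complete description possible - could be sketched roughly as below. The pattern representation is an assumption made for the example, not RESEDA's own.

```python
# Rough sketch of merging two compatible predicative patterns.
def compatible(p1, p2):
    """Same predicate and case slots, overlapping modulators (toy criterion)."""
    return (p1["predicate"] == p2["predicate"]
            and set(p1["slots"]) == set(p2["slots"])
            and set(p1["modulators"]) & set(p2["modulators"]))

def merge(p1, p2):
    """Combine two triggered patterns into one, keeping the tighter slot predictions."""
    if not compatible(p1, p2):
        return None
    merged = {
        "predicate": p1["predicate"],
        "modulators": sorted(set(p1["modulators"]) | set(p2["modulators"])),
        "slots": {},
    }
    for slot in p1["slots"]:
        # keep whichever pattern already constrains the slot
        merged["slots"][slot] = p1["slots"][slot] or p2["slots"][slot]
    return merged
```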
More precisely, in the last example, "in lieu of" is a potential trigger according to the following rule : if the main noun group of the surface prepositional phrase contains a trigger, this phrase constitutes a plane environment and "CONFER" introduces the plane created.The process I have outlined so far requires a corpus of heuristic rules -organized in the form of "grammars" associated with the predicative patterns of RESEDA's metalanguage -which will enable the slots in these patterns to be filled using the surface information in accordance with the predictions which characterize the slots.In the case of the pattern in figure 4, this fillingin poses no real problems, since the surface In the approach that we propose, marks of anaphora are identified during the general analysis procedure ; the actual solving brings into play a number of criteria from simple pairing off and morphological agreement to more subtle criteria, like contextual proximity, persistence of theme, etc. Thus, morphological agreement and contextual proximity are used to replace "who" by "Tanguy du ChAtel" in our example ; persistence of the theme enables us to fill in the missing date for Tanguy du Chatel's posting with the date "23rd October 1414" appearing in the surface expression.We would like to integrate this approach, which has been purely empirical up to now, into the framework of a more general theory. Two directions of enquiry seem particularly interesting in order to develop our own philosophy of the subject.The PAL system of Candace Sidner (1979; , is a top-down anaphora resolution method which Makes use of the notion of focus (likened to the theme of the discourse). By searching in the text for "focuses" which refer to a system of representation organized as a series of "frames", it is able to solve references.If the reference is not found by using the frames themselves, it is inferred from other frames contained in the database. The interest in this study lies in the fact that RESEDA already has, as permanent data, a certain amount of general knowledge organized in a form very similar to that of frames. Thus, in m 7 example, the nomination and dismissal of Andr6 Merchant refers ~o the context of the "civil war at the beginning of the 15th century" which is one of those frames (Zarri et el., 1977) . The approach used by Klappholz and Lockman (Lockman, 1978) depends on the hypothesis that there is a strong llnk between co-reference and the cohesive links of a discourse.These links, when marked progressively in the text, become indloes of the structure of the discourse, organized as a tree structure and created dynamically.These cohesive links (effect, cause, syllogism, exemplification, etc.) are very similar to the logical connections between planes in RESEDA (causality, finality, condition, etc.).The study that I have described here is Intended to automatically achieve a representation of fundamental underlying semantic relationships corresponding to a French surface expression. I have already pointed out the benefits that we hope to obtain fram this work as far as RESEDA is concerned.I should like ~o add that, on a more general level, solving the problem of automatically recording natural language data would obsiously allow us to face, with a certain amount of confidence, the analogous problems of natuxal language interrogation of RESEDA'S database j the advantages of this, fr~ the point of view of widespread use of the system, are obvious. 
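As a very rough illustration of the anaphora heuristics mentioned here - morphological agreement as a filter, contextual proximity as a tie-breaker - candidate antecedents could be ranked as in the sketch below. The candidate format, features, and scoring are invented for the example and do not reflect the system's actual rules.

```python
# Toy antecedent ranking: agreement filters, proximity decides.
def resolve(pronoun, candidates):
    """candidates: list of (mention, gender, number, distance_in_words)."""
    agreeing = [c for c in candidates
                if c[1] == pronoun["gender"] and c[2] == pronoun["number"]]
    if not agreeing:
        return None
    return min(agreeing, key=lambda c: c[3])[0]   # nearest agreeing mention

who = {"gender": "masc", "number": "sing"}
mentions = [("Andre Marchant", "masc", "sing", 42),
            ("Tanguy du Chatel", "masc", "sing", 3)]
print(resolve(who, mentions))   # -> "Tanguy du Chatel"
```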
But the results of this study can, in principle, be used not only in the framework of RESEDA, but in a number of different applications such as, for example, automatic abet.faction, paraphrase, machine translation and the direct coding of natural language documents in a factual database. | null | null | null | null | Main paper:
introduction:
The work that I intend to presen~ here is a preliminary study concerning the automatic translation of French natural language statements into the RESEDA semantic language.The RESEDA project itself is concerned with the creation and practical exploltation of a system for managing a biographical database using Artiflcial Intelligence (AI) techniques. The term "biographical data" must be understood in its widest possible sense : being in fact any event, in the public or private Ills, physical or intellectual, etc., that it is possible to gather about the personages we are interested in.In the present state of the system, this information concerns a well-defined period in time (approximately between 1350 and 1450) and a particular subject area (French history), but we are now working on the adaptation of RESEDA's methodology to the processing of other biographical data, for example medical or legal data. RESEDA differs from "classical" factual database management systems in two ways:-The information is recorded in the base using a particular Data Definition Language (metalanguage) which uses knowledge representation techniques.-A user interrogating the base obtains not only information which has been directly introduced This research is Jointly financsd by the "Agmnce de l'Informatique -A.D.I." (CNRS/ADI contract n ° 507568) and the "Centre National de la Recherche Scientifique -C.N.R.S." (ATP n ° 955045).into it, but also "hidden" information found using inference mechanisms particular to the system : in this respect, the most important character-Istic of the system lies in the Possibility of using inference procedures to question the database about causal relationships which may exist between the different recorded facts, and which are not explicitly declared at the time of data entry (Zarrl, 1979; . For example, the system may try to explain by inference top-level changes in the State administration in terms of changes in Political power.The biographical information which constitutes the systea's database is organized in the form of units called "planes". There are several different types of plane, sea Zarri e_~ al..L. (1977) ; the "predicative planes", the most important, correspond to a "flash" which illustrates a partlculam mament in the "Ills story" of one or more personagas. A predicative plane is made up of one of flve possible "predicates" (BE-AFFECTED-BY, BERAVE, BE-PRESENT, MOVE, PRODUCE) ; one or more "modulators" may be attached to each predicate. The modulator's function is to specify and delimit the semantic role of the predicate. Each predicate is accompanied by "case slots" which introduce their own arguments ~ dating and space location is also given within a predicative plane, as is the bibliographic authority for the statement. Predicative planes can be linked together in a number of ways ; one way is to use expllcit links of "coordination", "alternative", "causalic/'5 "finality", "condition", etc. The data representation we have chosen in the RESEDA project is basically, therefore, a kind of "case grammar", according to the particular meaning attached to the term in an AI context (Bruce, [975~ Charniak, 1981; etc.) . 
For example, the data "Andr~ Marchant was named provost of Paris by the King's Council on 22nd September 1413 ;he lost his post on 23rd October 1414, to the benefit of Tanguy du ChAtel, who was granted this office", will be represented in three planes -that of the nomination of Andr~ Merchant, his dismissal and the nomination of Tanguy du Oh&tel.The coding of information must be made on two distinct levels : an "external coding, up until now performed manually by the analyst, gives rise to a first type of representation, formalized according to the categories of the RESEDA metalanguage ; a second automatic stage results in the "internal" numeric code. The external "manual" coding of the three events just stated is given in figure I its associated "case slots". Every predicative plane is characterized by a pair. of "time references" (datel-date2) which give the dtLration of the episode in question.In these three planes, the second date slot (date2) is empty because their modulators (begin, end) specify a change of state associated with a punctual event."Andr~-Marchant" and "Tanguy-du-Ch&tel" are historical personages known to the system ; "provost", "king's-council" and "letters-of-nomination" are terms of RESEDA's lexicon. The classifications associated with the terms of the lexicon provide the major part of the system's socio-historical knowledge of the period. "Paris" is the "location of the object".If the historical sources analyzed gave us the exact causes of these events, we would introduce into the database the corresponding planes and associate them with these three planes by an explicit link of type "CAUSE". This manual procedure for converting information in natural language into one or more planes has at least two major disadvantages which the proposed study intends to deal with :-The manual representation of biographical information in the terms of the metalanguage can only be performed by a specialist. This is done, at the moment, by the researchers themselves who have constructed the prototype system. Such a method is obviously out of the question if the system is to be used routinely by an uninitiated public, especially as RESEDA was conceived as a system supplied continuously with biographical information extracted from many different sources.-In spite of the fact that the syntax of RESEDA's metalanguage imposes strict constraints on the forming of predicative schemata accepted by the system and that these are then thoroughly checked, we cannot completely exclude the possibillty of two coders translating the same information differently.To describe our methodology, I will use the example given in the preceeding section. The initial text in natural language is first (pre) processed to obtain its constituent structure. For this purpose, we have used in a first approach the French surface grammar implemented in DEREDEC, a software package developed at the University of Quebec at Montreal by Pierre Plante (1980a; 1980b) . This system, comparable to an ATN parser, permits a breakdown of the surfaoe text into its syntactic constituents, and establishes, between these constituents, syntagmatic relationships of the type "topic-comment", "determination" and "~oordina~ion". 
This preliminary analysis provides a context for subsequent processing, without necessarily removin~ all the ambiguities : in the same vein, see Bog~raev and Sparck Jones (1982) .The specific tools that we intend to develop for ~his project are of two types : a general procedure which can be likened to a sort of semantic parsing, and a system of heuristic rules.The first stage of the general procedure consists of marking the "triggers", defined as lexical units which call for one or more of the predicative patterns allowed for in RESEDA's metalanguage. Thus we do not take into consideration every one of the lexical items met in the surface text, retaining only those directly pertaining to the "translation" to be done. However, we do not limit ourselves to a simple keyword approach, since a number of operations utilizing data provided by the morphosyntactic analysis executed by DEREDEC are necessary before the predicative patterns which will be actually used afterwards can be selected.One of the results of the DEREDEC analysis ~s a kind of lemmatization enabling the reduction of surface forms in the text to a canonical form ; for example, infinitive in the case of verbs. The canonical forms found in the text under examination are compared with a list of potential triggers stored permanently in the system.In the case of the sentence we are analyzing we can construct from this list the following sub-list : verbal forms -"name", "loss", "qrant" ; ter~s pertainang directly to the metalanguage or terms which have a direct correspondence with elements of the metalanguage : "office", synonymous with "post" in RESEDA ("post" is a "generic" term, a "head" of a "sub-tree" in RESEDA's lexicon), and its specification "provost". The results of the pre-analysis executed by DEREDEC enable the elimination of potential patterns associated with the triggers "name" and "grant" which would correspond to surface constructions of type "active", as in the hypothetical example "The Duke of Orl~ans named Andr~ Marchant provost of Paris ...". The patterns which will be actually utilized afterwards are therefore those shown in figure 2. Note that in the case of a trigger "name (active form)" the parsonage who figures as surface object would have found as the "SUBJECT" of "BE-AFFECTED-BY",whlIst the surface subject would have been associated with the slot "sOURCE" of "BE-AFFECTED-BY". the papal court (social body)". Therefore, for example, the pattern in figure 3 is also associated with the trigger "name (passive form)" The patterns in this second set will be elimina=ed at the end of the construction procedure since, as xt is not possible to obtain a surface realization of the concept "<soclal-body>" in the position In reality, the predicative structures selected are not limited to those shown in figure I. They are in fact repeated with predicative patterns of the type "BE-AFFECTED-BY" which have as "SUBJECT" "<social-body>",and as "OBJECT" "<personage>" accompanied by the specification ("$PECIF") of a "<post>". These constructions each correspond to the description : "A personage receives a post in a certain organization (the organization in question, SUBJECT, is "augmented", BE-AFFECTED-BY, by the personage, OBJECT, in relation, SPECIF, to a given post)". 
A corresponding surface expression would be, for example, the following : "Andr~ Marchant (personage) is named secretary (post) of "SUBJECT", they cannot provide complete predicative structures.The last stage of the general procedure consists of examining the triggers belonging to the same morpho-syntactic environments, as defined by the results of the DEREDEC analysis.If there are several triggers pertaining to the same envlronment, and if the predicative patterns triggered are the same -which means that the predicates and case slots must be the same and that the modulators, dates and space location information must be compatible -then it can be said that the triggers refer to the same situation. As a name (passive form) ~ begin÷(soc+)BE-AFFECTED-BY SUBJ <social-body> OBJ <personage>-surface subject of the trigger SPECZF <post>-surface complement (SOURCE <personage>l<social-body>) datel : obligatory date2 : prohibited blblo : obligatory figure 3 result, the predicative patterns are merged as to obtain the most complete description possible ; the predictions about filling the slots linked with the cases of the resulting patterns together govern to search for fillers in the surface expression.Thus, the first two triggers in figure 2, recognized as relevant to the same environment, are combined in the formula in figure 4, which gives the general framework of plane i in figure I.elements "Andr~ Marchant", "provost", "King's Council" and "22nd September 1413" -standardized according to RESEDA's conventions, see figure Iwill take up the slots "SUBJECT", "OBJECT", "SOURCE" and "datel" directly. The filling-in operations are usually much more complicated, and require the use of complex inference rules.I shall say just a few words here about the heuristic rules designed to solve cases of anaphora (as in our example, "he", "this office", "who").begln+(soc*)BE-AFFECTED-BY SUBJ <personage>-surface subject of "was named" OBJ <post>-"provost" (SOURCE <personage>l<social-body>-surface complement of the agent of "was named") datel : obligatory date2 : prohibited bibl. : obligatory figure 4The example we are considering illustrates a particularly simple case, in which it is not necessary to establish links between the planes to be created.If we had to process the sentence "Philibert de St L~ger is nominated seneschal of Lyon on the 30th of July 1412, in lieu of the late A. de Viry", three planes should be generated : one for the nomination of Philibert de St L~ger, one for the death of A. de Viry, and another establishing a weak causality llnk ("CONFER", in our metalanguage) between the first two planes. Surface items such as conjunctions, prepositions and sentential adverbs can be used to infer links between planes : causality, finality, coordination, etc. 
More precisely, in the last example, "in lieu of" is a potential trigger according to the following rule : if the main noun group of the surface prepositional phrase contains a trigger, this phrase constitutes a plane environment and "CONFER" introduces the plane created.The process I have outlined so far requires a corpus of heuristic rules -organized in the form of "grammars" associated with the predicative patterns of RESEDA's metalanguage -which will enable the slots in these patterns to be filled using the surface information in accordance with the predictions which characterize the slots.In the case of the pattern in figure 4, this fillingin poses no real problems, since the surface In the approach that we propose, marks of anaphora are identified during the general analysis procedure ; the actual solving brings into play a number of criteria from simple pairing off and morphological agreement to more subtle criteria, like contextual proximity, persistence of theme, etc. Thus, morphological agreement and contextual proximity are used to replace "who" by "Tanguy du ChAtel" in our example ; persistence of the theme enables us to fill in the missing date for Tanguy du Chatel's posting with the date "23rd October 1414" appearing in the surface expression.We would like to integrate this approach, which has been purely empirical up to now, into the framework of a more general theory. Two directions of enquiry seem particularly interesting in order to develop our own philosophy of the subject.The PAL system of Candace Sidner (1979; , is a top-down anaphora resolution method which Makes use of the notion of focus (likened to the theme of the discourse). By searching in the text for "focuses" which refer to a system of representation organized as a series of "frames", it is able to solve references.If the reference is not found by using the frames themselves, it is inferred from other frames contained in the database. The interest in this study lies in the fact that RESEDA already has, as permanent data, a certain amount of general knowledge organized in a form very similar to that of frames. Thus, in m 7 example, the nomination and dismissal of Andr6 Merchant refers ~o the context of the "civil war at the beginning of the 15th century" which is one of those frames (Zarri et el., 1977) . The approach used by Klappholz and Lockman (Lockman, 1978) depends on the hypothesis that there is a strong llnk between co-reference and the cohesive links of a discourse.These links, when marked progressively in the text, become indloes of the structure of the discourse, organized as a tree structure and created dynamically.These cohesive links (effect, cause, syllogism, exemplification, etc.) are very similar to the logical connections between planes in RESEDA (causality, finality, condition, etc.).The study that I have described here is Intended to automatically achieve a representation of fundamental underlying semantic relationships corresponding to a French surface expression. I have already pointed out the benefits that we hope to obtain fram this work as far as RESEDA is concerned.I should like ~o add that, on a more general level, solving the problem of automatically recording natural language data would obsiously allow us to face, with a certain amount of confidence, the analogous problems of natuxal language interrogation of RESEDA'S database j the advantages of this, fr~ the point of view of widespread use of the system, are obvious. 
But the results of this study can, in principle, be used not only in the framework of RESEDA, but in a number of different applications such as, for example, automatic abet.faction, paraphrase, machine translation and the direct coding of natural language documents in a factual database.
Appendix:
| null | null | null | null | {
"paperhash": [
"sidner|focusing_for_interpretation_of_pronouns",
"zarri|building_the_inference_component_of_an_historical_information_retrieval_system",
"klappholz|contextual_reference_resolution"
],
"title": [
"Focusing for Interpretation of Pronouns",
"Building the Inference Component of an Historical Information Retrieval System",
"Contextual Reference Resolution"
],
"abstract": [
"Recent studies in both artificial intelligence and linguistics have demonstrated the need for a theory of the comprehension of anaphoric expressions, a theory that accounts for the role of syntactic and semantic effects, as well as inferential knowledge in explaining how anaphors are understood. In this paper a new approach, based on a theory of the process of focusing on parts of the discourse, is used to explain the interpretation of anaphors. The concept of a speaker's foci is defined, and their use is demonstrated in choosing the interpretations of personal pronouns. The rules for choosing interpretations are stated within a framework that shows: how to control search in inferring by a new method called constraint checking; how to take advantage of syntactic, semantic and discourse constraints on interpretation; and how to generalize the treatment of personal pronouns, to serve as a framework for the theory of interpretation for all anaphors.",
"The principal characteristic of the RESEDA system is being able to question an historical data-base about the causal relationships which may exist between different attested facts in the database, but which are not explicitly recorded. In order to do this the designers of RESEDA have created from scratch a methodology for the formalization and generalization of the reasoning used by historians in concrete situations. The description of the present state of this methodology is the object of this paper.",
"With the exception of pranomial reference, little, has been written (in the field of computational linguistics) about the phenomenon of reference i n natural language. This paper investigates the power and use of reference i n natural language. and the problems involved in its resolution. An algorithm is sketched for accomplishing reference resolution using a notion of cross-sentential focus, a mechanism f o r hypothesizing a l l possible contextual references, and a judgment mechanism f o r dis - ~ r i r n i n a t i ng among the hypotheses."
],
"authors": [
{
"name": [
"C. Sidner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. P. Zarri"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Klappholz",
"A. Lockman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null
],
"s2_corpus_id": [
"16805751",
"18913463",
"219304841"
],
"intents": [
[
"methodology"
],
[],
[]
],
"isInfluential": [
false,
false,
false
]
} | Problem: The paper aims to address the automatic translation of French natural language statements into the RESEDA semantic metalanguage, specifically focusing on the syntactic structure and semantic parsing process involved in this translation.
Solution: The paper proposes a methodology involving the identification of triggers in the natural language text, the selection of predicative patterns allowed in the metalanguage, merging these patterns, and filling their case slots with elements from the surface structure based on predictions associated with the slots. | 504 | 0.047619 | null | null | null | null | null | null | null | null |
458c550ed763d6dc7eac15faf6b84eb670893afa | 33956656 | null | Speech Interfaces: Session Introduction | The speech interface is the natural one for the human user and is beginning to be used in a limited way in many applications. Some of these applications are experimental; still others have achieved the status of cost-effective utility. A brief summary of the current state-of-the-art of speech input and output is presented. The two papers in the session represent specific examples of current work. Some comments on the need for linguistically oriented development conclude the paper. | {
"name": [
"Hogan, Douglas L."
],
"affiliation": [
null
]
} | null | null | First Conference on Applied Natural Language Processing | 1983-02-01 | 1 | 1 | null | Over the past four decades it has often been felt that the solution to the problem of "machine recognition of speech" is ".. just around the corner."When the sound spectrograph was invented (a little less than forty years ago) engineers, acousticians, phoneticists, and linguists were certain that the mysteries of speech were about to be unveiled.When powerful computers could be brought to bear (say -twenty years ago) there was a renewed feeling that such tools would provide the ~eans to a near term solution.When artificial intelligence was the buzzword (a little over ten years ago) it was clear that now the solution of the recognition problem was at hand.Where are we today?A number of modest, and modestly priced, speech recognition systems are on the market and in use. This has come about because technology has permitted some brute force methods to be used and because simple applications have been found to be cost effective.In speech output systems a similar pattern has emerged.Crude synthesizers such as the ~askins pattern playback of thirty years ago were capable of evoking "correct" responses from listeners. Twenty-five years ago it was thought that reading machines for the blind could be constructed by concatenating words.Twenty years ago formant synthesizers sounded extremely natural when their control was a "copy" of a natural utterance.Modern synthesizers are one one-thousandth the size and cost; they still only sound natural when a human utterance is analyzed and then resynthesized as a complete entity.Concatenatin 8 words is still no better, though cheaper, than it was twenty years ago.There are now several speech recognition systems on the market which are intended to recognize isolated words and which have been trained for an individual speaker. The vocabulary sizes are on the order of 100 words are phrases.Accuracy is always quoted at "99+%." These recognizers use a form of template matching within a space which has the dimensions of features versus time.The "true" accuracy is a function of the vocabulary size, the degree of cooperativeness of the speaker, and the innate dissimilarity of the vncab ulary.Since the systems are recognizing known words by known speakers the major source of varia billty in successive words is the time axis. The same word may (and will) be spoken at different speaking rates. Unfortunately, different speaking rates do not result in a linear speed change in all parts of a word; the voiced portions of the word, loosely speaking the vowels, respond more to speed change; the unvoiced portions of the word, loosely the consonants, respond less to speed change.As a result, a nonlinear time adjustment is desired when matching templates. 
This sort of time adjustment is carried out with a mathematical process known as dynamic programming, which permits exploration of all plausible non-linear matches at the expense of (approximately) squaring the computational complexity, in contrast to the combinatorial computational growth that would otherwise be required. The medium and high performance speech recognizers usually contain some form of dynamic programming. In some cases more than one level of dynamic programming is used to provide for recognition of short sequences of words. The actual use of these recognizers has developed a number of consequences. Many of them, including the first paper in this session, involve the use of speech recognition during hands-and-eyes busy operations. These applications will almost always be interactive in nature; the system response may be visual or aural. Prompt response saying what the system "heard" is crucial for improving the speaker's performance. A cooperative speaker clearly adapts to the system. To date, many applications are found where a restricted interactive speech dialog is useful and economical. At this time the speech recognition mechanism is relatively inexpensive; the expensive component is the initial cost of developing the dialog for the application and interfacing the recognition element to the host computer system. At the present time recognition is not accomplished in units smaller than the word. It has been hoped that it might be possible to segment speech into phonemes. These would be recognized, albeit with some errors; the strings of phonemes would then be matched with a lexicon. To date, adequate segmentation for this sort of approach has not been achieved. In fact, in continuous fluent speech good word boundaries are not readily found by any algorithmic means. There are relatively few speech synthesizers in the pure sense of the word. There are many speech output devices which produce speech as the inverse of a previously formed analysis process. The analysis may have been performed by encoding techniques in the time domain; alternatively, it may be the result of some form of extracting a vocal source or excitation function and a vocal tract description. When the analysis is performed on a whole phrase the prosodic features of the individual uttering the phrase are preserved; the speech sounds natural. When individual words produced by such an analysis-synthesis process are concatenated the speech does not sound natural. In any event, the process described above does not allow for the open-ended case, synthesis of unrestricted text. This process requires that a number of steps be carried out in a satisfactory way. First, orthographic text must be interpreted; e.g. we read "NFL" as a sequence of three words but we pronounce the word "FORTRAN"; we automatically expand out the abbreviation "St.", etc. Second, the orthography must be converted to pronunciation, a distinctly non-trivial task in English. This is normally accomplished by a set of rules together with a table of exceptions to those rules. Although pronouncing dictionaries do exist in machine form, they are still too large for random access memory technology, although this will not be true in the reasonably near future. Proper nouns, especially names of people and places, will often not be amenable to the rules for normal English. Third, the pronunciation of the word must be mapped into sequences drawn from an inventory of smaller units.
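The dynamic-programming alignment alluded to above is essentially what is now usually called dynamic time warping. A bare-bones version over two sequences of acoustic features might look like the following sketch; the frame distance and the example sequences are placeholders, not any particular recognizer's implementation.

```python
# Minimal dynamic time warping sketch: cost of the best nonlinear alignment
# between two feature sequences. Purely illustrative.
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch the template
                                 cost[i][j - 1],      # stretch the input
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

print(dtw([1, 2, 3, 3, 4], [1, 2, 3, 4]))   # low cost: same word, different speaking rate
```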
At various times these units have been allophones, phonemes, dlphones (phoneme pairs), demlsyllables, and syllables.The units are connected with procedures which range from concatenation to smooth interpolation.Finally, it is necessary to develop satisfactory prosody for a whole phrase or sentence. This is normally interpreted as providin& the information about inflection, timing, and stress. This final step is the one in which the greatest difficulty exists at the present time and which presents the strongest bar to natural sounding speech.The second paper in thls session deals wlth the development of stress rules for prosody, one component of =he overall problem.Moat of the current high end work in speech recognition attempts Co c6nstrain the allowable sequence of words by the application of some kind of grammar.This may be a very artificial grammar, for example the interaction wlch an airline reservation system.Other research efforts attempt Co develop models of the language through an information cheoretlc analysis.Coming full circle we find words being analyzed as a Markov process; Merkov, of course, was analyzing language when he developed thls "mathematically defined" procese.Normalizing recognition to the speaker is being approached in two ways.The first, currently being explored at the word reco&nitlon level consists of developing enough samples of each word from many speakers so chat clustering techniques will permit the speaker space to be spanned with a dozen or so examples.The second approach attempts to enroll a speaker in a recognltlon system by speaking "enough" text so tha~ the system is able to develop a model of that person's speech.In research on speech synthesis considerable attention is now being &iven to try, by analysis, to determine rules for prosody.Application of these rules requires grammatical analysis of the text which is to be converted co speech. | null | null | null | As both of the speech interface tasks become more and more open-ended It is clear that satisfactory performance will require very substantial aid from linguistic reseacrh.In the case of recognition this is necessary to reduce the number of hypotheses that must be explored at any given point in a stream of unknown words.In the case of text-to-speech, understandin~ of what iS being said will contribute to producing more natural and acceptable speech.The reference below surveys the current state-of-the art more deeply than can be presented here.It also calls out the need for Increased application of lln&ulstlc information to speech interface development as well as providln~ an extensive set of references for those of you who would llke Co dig deeper. | Main paper:
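The rule-plus-exception approach to letter-to-sound conversion mentioned earlier is easy to caricature in a few lines. The exception table, the rules, and the phone labels below are invented stand-ins used only to show the control flow (exceptions checked before rules, letters falling through otherwise).

```python
# Caricature of rule-plus-exception letter-to-sound conversion.
EXCEPTIONS = {"colonel": "K ER N AH L", "of": "AH V"}
RULES = [("ph", "F"), ("th", "TH"), ("ch", "CH"), ("a", "AE"), ("e", "EH")]

def letter_to_sound(word):
    w = word.lower()
    if w in EXCEPTIONS:                  # exception table consulted first
        return EXCEPTIONS[w]
    phones, i = [], 0
    while i < len(w):
        for graph, phone in RULES:
            if w.startswith(graph, i):
                phones.append(phone)
                i += len(graph)
                break
        else:
            phones.append(w[i].upper())  # fall back to the letter itself
            i += 1
    return " ".join(phones)

print(letter_to_sound("phoneme"))
```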
the future:
As both of the speech interface tasks become more and more open-ended It is clear that satisfactory performance will require very substantial aid from linguistic reseacrh.In the case of recognition this is necessary to reduce the number of hypotheses that must be explored at any given point in a stream of unknown words.In the case of text-to-speech, understandin~ of what iS being said will contribute to producing more natural and acceptable speech.The reference below surveys the current state-of-the art more deeply than can be presented here.It also calls out the need for Increased application of lln&ulstlc information to speech interface development as well as providln~ an extensive set of references for those of you who would llke Co dig deeper.
introduction:
Over the past four decades it has often been felt that the solution to the problem of "machine recognition of speech" is ".. just around the corner."When the sound spectrograph was invented (a little less than forty years ago) engineers, acousticians, phoneticists, and linguists were certain that the mysteries of speech were about to be unveiled.When powerful computers could be brought to bear (say -twenty years ago) there was a renewed feeling that such tools would provide the ~eans to a near term solution.When artificial intelligence was the buzzword (a little over ten years ago) it was clear that now the solution of the recognition problem was at hand.Where are we today?A number of modest, and modestly priced, speech recognition systems are on the market and in use. This has come about because technology has permitted some brute force methods to be used and because simple applications have been found to be cost effective.In speech output systems a similar pattern has emerged.Crude synthesizers such as the ~askins pattern playback of thirty years ago were capable of evoking "correct" responses from listeners. Twenty-five years ago it was thought that reading machines for the blind could be constructed by concatenating words.Twenty years ago formant synthesizers sounded extremely natural when their control was a "copy" of a natural utterance.Modern synthesizers are one one-thousandth the size and cost; they still only sound natural when a human utterance is analyzed and then resynthesized as a complete entity.Concatenatin 8 words is still no better, though cheaper, than it was twenty years ago.There are now several speech recognition systems on the market which are intended to recognize isolated words and which have been trained for an individual speaker. The vocabulary sizes are on the order of 100 words are phrases.Accuracy is always quoted at "99+%." These recognizers use a form of template matching within a space which has the dimensions of features versus time.The "true" accuracy is a function of the vocabulary size, the degree of cooperativeness of the speaker, and the innate dissimilarity of the vncab ulary.Since the systems are recognizing known words by known speakers the major source of varia billty in successive words is the time axis. The same word may (and will) be spoken at different speaking rates. Unfortunately, different speaking rates do not result in a linear speed change in all parts of a word; the voiced portions of the word, loosely speaking the vowels, respond more to speed change; the unvoiced portions of the word, loosely the consonants, respond less to speed change.As a result, a nonlinear time adjustment is desired when matching templates. 
This sort of time adjustment is carried out with a mathematical process known as dynamic programming which permits exploration of all plausible non-linear matches at the expense of (approximately) squaring the compu rational complexity in contrast to the comblna torlal computational growth that would otherwise be required.The medium and high performance speech recognizers usually contain some form of dynamic programming.In some cases more than one level of dynamic programming is used to provide for recognition of short sequences of words.The actual use of these recognizers has developed a number of consequences.Many of them, including the first paper in this session involve the use of speech recognition during hands-andeyes busy operations.These applications will almost always be interactive in nature; the system response may be visual or aural.Prompt response saying what the system "heard" is crucial for improving the speaker's performance.A cooperative speaker clearly adapts to the system. To date, many applications are found where a restricted interactive speech dialog is useful and economical.At this time the speech recognition mechanism is relatively inexpensive; the expensive component is the initial cost of developing the dialog for the appllcaClon and interfacing the recognition element Co the host computer system.At the present tlme recognition is not accomplished in units smaller than the word.It has been hoped chat it might be possible to segment speech into phonemes. These would be recognized, albeit with some errors; the strings of phonemes would then be matched with a lexicon.To date, adequate segmentation for this sort of approach has not been achieved.In fact, in continuous fluent speech good word boundaries are not readily found by any algorithmic means.There are relatively few speech synthesizers in the pure sense of the word.There are many speech output devices which produce speech as the inverse of a previously formed analysis process. The analysis may have been performed by encodln& techniques in the tlme domain; alternatively, it may be the result of soma form of extracting a vocal source or excitation function and a vocal tract descrlptlou. When the analysis is performed on a whole phrase the prosodic features of the indivdual uttering the phrase are preserved; the speech sounds natural.When individual words produced by such an analysls-synthesls process are concatenated the speech does not sound natural.In any event, the process described above does not allow for the open ended case, synthesis of unrestricted text. This process requires that a number of steps be carried out in a satisfactory way.First, orthographic text must be interpreted; e.g. we read "NFL" as a sequence of three words but we pronounce the word "FORTRAN', we automatically expand out the abreviation "St.", etc. Second, the orthography must be converted Co pronunciation, a distinctly non-trlvial task in En~llsh. This is normally accomplished by a set of rules together with a table of exceptions to those rules. Although pronouncing dictionaries do exist in machine form, they are still coo large for random access memory technology, although thls will not be true in the reasonably near future. Proper nouns, especially names of people and places, will often not be amenable to the rules for normal English.Third, the pronunciation of the word must be mapped into sequences drawn from an inventory of smaller units. 
At various times these units have been allophones, phonemes, diphones (phoneme pairs), demisyllables, and syllables. The units are connected with procedures which range from concatenation to smooth interpolation. Finally, it is necessary to develop satisfactory prosody for a whole phrase or sentence. This is normally interpreted as providing the information about inflection, timing, and stress. This final step is the one in which the greatest difficulty exists at the present time and which presents the strongest bar to natural sounding speech. The second paper in this session deals with the development of stress rules for prosody, one component of the overall problem. Most of the current high-end work in speech recognition attempts to constrain the allowable sequence of words by the application of some kind of grammar. This may be a very artificial grammar, for example the interaction with an airline reservation system. Other research efforts attempt to develop models of the language through an information theoretic analysis. Coming full circle, we find words being analyzed as a Markov process; Markov, of course, was analyzing language when he developed this "mathematically defined" process. Normalizing recognition to the speaker is being approached in two ways. The first, currently being explored at the word recognition level, consists of developing enough samples of each word from many speakers so that clustering techniques will permit the speaker space to be spanned with a dozen or so examples. The second approach attempts to enroll a speaker in a recognition system by speaking "enough" text so that the system is able to develop a model of that person's speech. In research on speech synthesis considerable attention is now being given to trying, by analysis, to determine rules for prosody. Application of these rules requires grammatical analysis of the text which is to be converted to speech.
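To make the idea of converting orthography to pronunciation with "a set of rules together with a table of exceptions" concrete, here is a minimal letter-to-sound sketch; the tiny rule list, the exception entries, and the phoneme symbols are invented for illustration and are far smaller than anything a working text-to-speech system would use.

# Toy grapheme-to-phoneme conversion: consult an exception table first,
# otherwise apply left-to-right spelling rules, longest match first.
# Every entry below is an illustrative assumption, not a real rule set.
EXCEPTIONS = {"one": "W AH N", "two": "T UW"}

RULES = [("ch", "CH"), ("sh", "SH"), ("th", "TH"),
         ("a", "AE"), ("e", "EH"), ("i", "IH"), ("o", "AA"), ("u", "AH"),
         ("b", "B"), ("c", "K"), ("d", "D"), ("f", "F"), ("g", "G"),
         ("h", "HH"), ("k", "K"), ("l", "L"), ("m", "M"), ("n", "N"),
         ("p", "P"), ("r", "R"), ("s", "S"), ("t", "T"), ("v", "V"),
         ("w", "W"), ("y", "Y"), ("z", "Z")]

def letter_to_sound(word):
    word = word.lower()
    if word in EXCEPTIONS:          # the exception table always wins
        return EXCEPTIONS[word]
    phones, i = [], 0
    while i < len(word):
        for spelling, phone in sorted(RULES, key=lambda r: -len(r[0])):
            if word.startswith(spelling, i):
                phones.append(phone)
                i += len(spelling)
                break
        else:
            i += 1                  # letters with no rule are skipped silently
    return " ".join(phones)

# letter_to_sound("chip")  ->  "CH IH P"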
Appendix:
| null | null | null | null | {
"paperhash": [
"flanagen|talking_with_computers:_synthesis_and_recognition_of_speech_by_machines"
],
"title": [
"Talking with Computers: Synthesis and Recognition of Speech by Machines"
],
"abstract": [
"Humans find speech a convenient and efficient means for communicating infonnation. Machines, in contrast, prefer the symbols of assemblers and compilers-exchanged, typically, in printed form through a computer terminal. If computers could be given human-like abilities for voice communication, their value and ease of use for humans would increase. The ubiquitous telephone would take on more of the capabilities of a computer terminal. Making machines talk and listen to humans depends upon economical implementation of speech synthesis and speech recognition. Heretofore the complexities and costs of these functions have deterred wide application. But now, fueled by the advances in integrated electronics, opportunities for expanded and enhanced telephone services are emerging. This paper assesses the progress in synthesis and recognition of speech by computer techniques, and it outlines potential applications in voice-communication services."
],
"authors": [
{
"name": [
"James L. Flanagen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null
],
"s2_corpus_id": [
"20708549"
],
"intents": [
[]
],
"isInfluential": [
false
]
} | null | 504 | 0.001984 | null | null | null | null | null | null | null | null |
48b47848edee567201ebb19bb6c1772068c572cf | 1934330 | null | {IR-NLI} : An Expert Natural Language Interface to Online Data Bases | Constructing natural language interfaces to computer systems often requires achievement of advanced reasoning and expert capabilities in addition to basic natural language understanding. In this paper the above issues are faced in the frame of an actual application concerning the design of a natural language interface for the access to online information retrieval systems. After a short discussion of the peculiarities of this application, which requires both natural language understanding and reasoning capabilities, the general architecture and fundamental design criteria of a system presently being developed at the University of Udine are then presented. The system, named IR-NLI, is aimed at allowing non-technical users to directly access through natural language the services offered by online data bases. Attention is later focused on the basic functions of IR-NLI, namely, understanding and dialogue, strategy generation, and reasoning. Knowledge representation methods and algorithms adopted are also illustrated. A short example of interaction with IR-NLI is presented. Perspectives and directions for future research are also discussed. | {
"name": [
"Guida, Giovanni and",
"Tasso, Carlo"
],
"affiliation": [
null,
null
]
} | null | null | First Conference on Applied Natural Language Processing | 1983-02-01 | 17 | 17 | null | Natural language processing has developed in the last years in several directions that often go far beyond the original goal of mapping natural laua-~uage expressions into formal internal representazions. ?roblems concerned with discourse modeling, reasoning about beliefs, knowledge and wants of speaker and hearer, expiicitely dealing with goals, plans, and speech acts are only a few of the topics ~f current interest in the field. This paper is con-:erned with one ~spect of natural language processing that we name here reasoning, it is intended as a basic • ctivity in natural language comprehension that is aimed at capturing spes/~er's goals and intentions that often lie behind the mere literal meaning of the utterance. In this work we explore the main implications of reasoning in the frame of an acttu&l application which is concerned with the natttral language acees to online information retrieval services (Politt,1981; Waterman,1978) .In particular, we shall present the detailed design of a system, named IR-NLI (Information Retrieval Natural Language Interface), that is being developed at the University of Udine and we shall discuss its main original features. The topic of natural language reasoning is first shortly i!lustrated from a conceptual point of view and compared to related proposals. The main features of the chosen application domain are then described and the specifications of ~R-NLI are stated. We later turn to the architecture of the system and we go fturther into a detailed account of the structure of its knowledge bases and of its mode of operation. Particular attention id devoted to the three fundamental modules STARTEGY GENERATOR, REASONING, ~nd UNDERSATNDING AND DIALOGUE. A sample search session with IR-NLI concludes the illustarion of the project. A critical evaluation of the work is zhen presented, and main lines of the future development ~f the project are outlined with particular ~tention to original research issues.NATURAL LANGUAGE UIi~ERST?S4DING .~ND REASONI:~G Research in natural langage precessin 6 has recome in the last years a highly multiiisciplinzry topic, in which artificial inteliigence, computational linguistic, cognitive science, psycholo~-, ~nz logic share a wide set of common intrests, in tnia frame, reasoning i~ not a new issue. The meaning that we attach to this term in the zontex~ ~f this [~ R±~o with : Milan Polytechnic Artificial Intelligence Project, Milano, Italy. ' ~ ~iso with : CISM, International Center for Mechanical Sciences, Udine, Italy. work is, nevertheless original. We distinguish in the natural language comprehension activity between a surface comprehension that only aims at representing the literal content of a natural language expression into a formal internal representation, and a leap comprehension that moves beyond surface meaning to capture the goals ~nd intentions which lie behind the utterance (Grosz,1979; Hobbs,1979; Allen, ?errault,1980) . The process that brings from surface to deep comprehension is just what we name here reasonin~ activity. Differntly from Winograd (1980) , reasoning is not , in our model, something that takes place after understanding is completed and aims at developing deductions on facts ~nd concepts %cquired. 
Reasoning is a basic paz~c of deep comprehension and involves not only linguistic capabilities ~understanding and dialogue) but also deduction, induction, analogy, generalization, etc., on common sense and domain specific knowledge. in the application of online information retrieval that we face in this work,the above concepts are :~nsi!ered in the fr%me of man-machine commLLnica -:ion, and reasoning will mostly be concerned with terrsinolo~j, as we shall further explore in the next section. 3NLIN£ INFO2MATION RETRIEVAL in :his section we present an application domain ;here the topic of natural language reasoning plays fundamental role, namely, natural language access to online information retrieval services. As it is well known, online services allow interested users zo solve information problems by selecting and re-~riev~ng reievant io~uments stored in very large bi-bilogr%phic sr f~ctual data bases. 3enerail~" end-users ire unwilling or unable] ~o serach ~ersonai!y and iirectly access these large files, but they Dften rely 3n the ~ssistanee ~f e skilful information professional, the intermediary, who h.ws how tc select e~prtpriate data bases end hr.. to design good search ~%artegies for the retrieval of the desired information, and how ~o impiement them in e suitable formal luery !an~/age. Usually, the interaction between end-~=er %n~ intermediary begins ~ith ~ presearch intervlew aimed e~ precisely clarifying the content and t .e Db.jecti'zes of nhe information need. On tha base zf the information gathered, the inzeremdiary chooses the most suitable data bases and, with the nei~ :f seraching referr~l aids such ~s thesauri, iirecncties, etc., he devises the search ~trate~# no 0e executed by the information retrieval system. The output of the search is then evaluated by the enduser, who may propose K refinemen~ ~nd an interaction of the search for better matching hi3 requests.We claim that the intermediary's task represents a good example of the issues of natur~i ian~aage reasoning, part~icularly for what concerns the ebliity ~f understanding natural language user'3 :'equest~ an! ;:' reasoning on their linguistic and aemanzlz z'..nn:e~ in order :o fully :~pture user's nears ~nd gczl~. Besides, it has to be stressed that ~he intermedizrv should also posses other important skills, nh..: i_ expertise ~nd precise knowledge ~bout ia~a ba~e cantent, organization, and indexing criteria, abcu~ availability and use of searching referral ai!s,abBut system query languages and ~coess procedures, znd last about how %o plot ~nd construct en adequate search strategy. The above illustrated :hrzcteriszlcs motivate the design of a natural Language expert ~ystem for interfacing online ~ata bases. 3n fact, the !R-NLI project has among its long term goals the i~ plementation of a system to be interposed between the end-user and the information retrieval system, capa ble of fully substituting the intermediary's role.IR-NLI is conceived as an interactive interface to online information retrieval system suppoz~ing English language interaction. It should be able to manage a dialogue with the user on his information needs and to construct an appropiate search strategy. More precisely, IR-NLI is aimed at meeting the needs of non-technical users who are not acqua/nted with on line searching. For this purpose three different capabilities are requested. First, the system has to be an expert of online searching,i.e, it must embed knowledge of the intermediary's professional skill. 
Second, it must be able of understanding natural language and of carrying on the dialogue with the user. Third, it has to be capable of reasoning on language in order to capture the information needs of the user and to formulate them with appropriate terms in a given formal query language.In the current first phase of the project we have considered a set of working hypotheses for IR-NLI :it operates on just one data base;-it utilizes only one query language; -it refers to only one subject domain; -it is conceived only for off-line use without interaction with the data base during the search session.fire suitable sequences of understanding, dialogue, and reasoning functions until the internal repre.sentation of the user's requests is completely expanded and validated.The UNDERSTANDING AND DIALOGUE module is devoted to perform activities mostly of linguistic concern. First, it has to translate the natural language user's requests into a basic formal internal representation (IR). Second, it manages the dialogue with the user by generating appropriate queries and by translating his replays,thus expanding the IR with new information. The UNDERSTANDING AND DIAILGUE ~odule utilizes for its operation a base of lin~uiszic kno.led~e (LK).The REASONING module is aimed at reasoning on IR in order to enlarge its content with all the information required to generate an appropiate search strate~[. It utilizes for this task a base of domain specific knowledge (DSK).The FORMALIZER module, after the STRATEGY GEf~-RATOR has completed its activity, constructs from the IR the output search stra~e~ to be executed for accessing the online data base. The FORMALIZER utili zes for its operation knowledge about the formal fan guage needed to interrogate the online data base and operates through a simple syntax-directed schema, it is conceived as a parametric translator capable of producing search strategies in several languages for accessing online services, such as SDC ORBIT, Euronet DIANE, Lockheed DIALOG, etc."!The general architecture designed for the IR-NLI system is shown in Figure 2 . The kernel of the system is constituted by the 3~dEI'ZGY GENERATOR , which is devoted to devise the top-level choices conzerning the overall operation of the syszem and to :cntroi their execution. It utilizes for it~ acti-"'ity a base 3f expert knowledge (E~K) which concerns the evaluation of user's requests, the managament 3f the presearch interview, the selection of a suitable approach for ~eneration of the search strategy, and ~uheduling of the activities of the lower Level modules :~ERSTAINDING AND DIALOGUE, REASONING, and FORMALIZER. The operation of the STRATEGY GENERA-TOR is organized around a basic sequence of steps, each taking into ~ccount a differnt subset of expert rules, that r%ppiy tO different situations and In this section we shall illustrate the main features of the three knowledge bases utilized by the IE-NLI system.Let us begin with DSK. The purpose of this kno K ledge base is to store information about the domain covered by the online data base to which IR-NLI refers. This information presents two ~spects : a semantic facet concerning what concepts are in the data base and how they relate to each other, and a linguistic one concerning how the concepts are currently eta'pressed through appropriate termS. The structure of DSK proposed reflects and generalizes to ~ome extent that of classical searching referral aids (in particular, thesauri and subject classifications). 
At a logical level, it is constituted by a labelled directed network in which nodes represent concepts and directed arcs represent relations between concepts. Each node contains a ~erm., a fla~ denoting whether the term is controlled or not, a field that stores the post in~ count, i.e. the number of items in the data base in which the term appear, and a level number which represents the degree of specificity of ~he term in a hirar-:hical subject classification. Arcs g'n~erai!y denote the usual cross-reference relationships utilized for struc=uring thesauri; e.g., BT (broader term), (narrower term), RT (related term), UF (used for). In addition, arcs of type ne~ are provided that al-Low, in connection with the level numbers of nodes, sequential scanning of the knowledge base accoriing to the currently utilized hierarchical subject ~iassification. This s~ructure is conceived to be ilrec~ly obtained (possibly in a partially automatic way ~hrough appropriate data conversion programs) from ~vailable searching referral aids and online thesauri.Le~ us turn now to LK. This knowledge base is ~imed at supplying all information concerning natural language ~ha~ is needed to understand user's reluests. According to the mode of operation of =he 'XDERSTANDING A~D DIALOGUE module (see section IX), it 2on%ains the lexicon of the application domain which is currently considered. Each record of the lexicon 2ontains ~ word of =he language, its sem:~n-t~2 ~2~.~e concept, ~onnec~ive, f'anction~, ~nd its ~e%nin~. The semantic type denotes ~he role ~f % ~ord in a sentence; namely:-ienoting ~ term of zhe da=a base; -iefining z parz icuiar relation between different 2.]nc~ta in user's requests; -specifying ~ particular function that the user ie-~ires ~o obtain from the information re.ri_,al~ =z ~ystem.The meaning 9f R word may be expressed zs ~ pointer to a term of the DSK in the case of a word of type concept, as a special purpose procedure in ~he case of a connective or a f~nc:ion.Let us note that,in order to avoid ~nuseful duplication of informa=ion in =he DSK and LK, a shared directory of en~r y words may be u~iiized f~r boLh bases.The purpose of EK is to contain information tha= concerns the professional expertise of the intermediary on how to manage a search session in 0rder to appropriately satisfy the information needs of the end-user. Its contort= is made up sf several classes of rules concerning the different kinds 3f activities performed during a search session. 3he general s~ructure of the rules is of the ciassical type !F-THEN.The task of the STRATEGY JENEEATCR can be considered from two differen~ poinzs of view :-an external one, tha= concerns performing in~er-media~j's activity;an in=ernal one, that rela=es to management and control of REASONING and U~DERSTANDiI;G ~ 31ALC-GUE modules.On ~he base of these specifications, it mus~ embed exper~c capabilities and behave Rs a consultation system for information retrieval [?oli:t,~9@: ..... basic mode of operation of this =odtule is ~rganized around the following four main steps tha~ reflect the usual practice of online information searching (Lancaster,~979; Meadow, Cochrane,~Od~. ~. perform presearch interview 2. select approach 3. devise search startegy h. construct search s~ar~egy.The IN adopted is unique throughou: the whole operation of the system and it is :onszitu:e/ by z frame, initialized by the UNDERSTAS~ING ~ :IALSOUE modu_le,and then further refined and expanded cy the reasoning module. 
This fr~-me i~ &ErucEured in%z ~ucframes in such % way no :ontain, :!zssi:'~ei un!er different headings, any information ~ha% is relevant for searching an online data base, and ~3 zii;w an effective pattern-matching for the seleczizn cf search approaches and tactics. More ~ecifizziiy, l? encompasses terminology about zoncepts and facetz present in user's requests, c~-ifi=~tizns about search constraints and output forma~, ~nd fi~lres about search objectives such &s recall and ~recisi~n ~Meadow, Cochr~ne,1981~.To go further in our description, let us introduce precise definitions of two technical terms above used in an informal way : search approach : the abstract way of facing a search problem, reasoning on it, analyzing its facets, and devising a general mode of opera~ion for having access to desired informazion stored in an online data base; search tactic : a move, a single step or action, in the execution of a search approach.Let us recall that a search strafe@D, is a program, written in an appropriate formal query lan~aa~e, for obtaining desired information from an online system; taking into account the two above definitions a search strate~j can be viewed as the result ~£ the execution of a search approach through application of appropriate ~earch tactics.Within IR-NLI, a search approach is represented as an al~orithm that defines which tactics to utilize, among the available ones, and how to use them in the construction of a s~rate~. An approach is not however a fixed procedure, since it does not ~pecify at each step which paz~cicular tactic to execute, but only suggests a set of candidate tactics, whose execution may or may not be fired. The operation of the STRATEGY GENERATOR is basically pattern-directed; namely, the particular activities to be performed and the way in which UN-DERSTANDING ~D DIALOGUE and REASONING modules are activated are determined by the content of the current IR *or of some par~cs of i~), which is matched with zn appropriate subset of the exper~ rules. In :his way !~3 mode of ~peration is not strictly determinate : ~ome %ctivities may or may not be fired .r may be perfDrmed in !ifferent ways according ~o the results 3f :he pattern-matching algorit~hm. As already mentioned in section IV, in the first version of IR-NLI the off-line operation :f the system lead us go consider only the buiidin~ block approach; future versions of the system viii encompass also other classical and ~ommoniy u~ilized approaches such as successive fraction, zita -:ion pearl growing, most speclfic facet flrsn, ~nc. {Meadow, Cochr~e,1981), ~hac are more ~uizab!e fJr an on-line operation of the ~ystem in which iicezt interaction with the data base luring the ae~rcn session is allowed.The .ctivity 3f the STRATEGY GENERATOR can now he repr:sented in % more ~etaiied way through ~he fsllowing high-lave! program :The REASONING module operates on the IR ani za Among the basic capabilitis of the ~ESONING module we consider generalization to broader terms, extension to related concepts, particularization to narrower terms, analysis of synonymi and homonymi, etc. its operation is based on special-purpose procedures that correspond to the reasoning actions involved in the tactics. Furthermore, when an action has to be performed on IR for extending its content, validation may be requested from the user in order to ensure a correct matzhing betvwen his wants and system proposals. 
This is done through the U~DERSTANDING AND DIALOGUE molule which has to gather user's agreement about the new terms to be introduced in the IR.U~DERSTA/~DING .~D DIALOGUEZdr.e purpose of the UNDERSTANDING AND DIALOGUE module is twofold : basic internal ~epresentation~, an~ manages a 7%ttprn-iirected invocation of heuristic rules for :-eso~uti~n 9f critical ~vent~ 'e.g., ambi~ai~ies, 9ilizpes , %naphorlc references, indirect ~9eech, :[2..~ important feature 0f the understanding fun-~%lon iz the ability to solve critical situations by engaging the user in a clarification iizlDgue ±ccivated by some of the above mentioned heuristic rules, to gather additional information which is zecessar}" to correctly ~inderstand the input natu-r%i [~nguage requests.For what concerns the dialogue function, it relies on two strictly connected activities :-generation of a lue~j, according to some requests from the STRATEGY GENERATOR ~r HEASONIIIG modules, through assembly and completing of parametric =e:~ fragments stored in the UNDERST.~ND!NG '~D DIALOGb~ module% -understanding of the user's answer and refinement, i.e. validation, updating or completing, of the current IR.Let as s~ress that, according to the basic goaloriented conception of the parsing mechanism of "~TDERSTANDING AND DIALOGUE module, the ur.ders~uding activity performed in the frame of :he diaiogue function is strongly directed by knowledge of the query tha~ the system has asked the user ~nd, therefore, of expected information to be zap~ured in the answer.In this section we present % short example of the basic mode of operation of IE-ZLI. Figure shows a sample ~ession in which, in ~ddition to the user-system dialog/e, parts of the .'R and the search strategy generated (in Euronet DI.L\'~ EL~O-LANGUAGE) ~re reported. The -~xample refers to the domain of computer science.CONCLUSIOn;In the paper the main features of =he ZR-~;LI system have been presented. The projec~ is now entering the experimental phase =hat will be carried on on a VAX 11,'7~0 syszem.The design activity so far !evelc~ed ,3uiia, Tasso,1982b Tasso, ,1982c has reached, 2n cur mind, a quite assessed point so that fuzure work :n -his norA-" will oe mainly concerned wi-h r~mo':~i -f -he restric =ions and working hypotheses -'onsiaered in the current first ~nase and with refinemenr. ::' [z~/emen=~-%ion de=aLia. ~he authors also g~±n %: [aziememt _n the next future a-'omple~.e prototype versicr. ~f =::z system to be -'onnected to a real ;nlane S~:s=e~ in "-he fro-me .~f a strictly application ~rle..ted interest .The research activity will be focused, .'n -he ocher hand, on several issues c': investigation. Among these we mention ::z= [i,~-r'--~ f--the development of more flexible and robust dialogue capabilities, including limited justification of the mode of operation of ~he system (Webbet,1982) ; -the study of advanced representations of ~actics through generalized rule structures that will allow more refined matching and firing mechanisms (Winston,198E);the design of new tactics (e.g., PATTERN, ~ECORD, BIBBLE (Bates,~979)) ~nd reasoning actions, that enable the system to keep track of previous search sessions ~nd to ~nalogize from experience in devising and executing a search approach. | null | null | null | null | Main paper:
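As a rough sketch of the thesaurus-like domain specific knowledge (DSK) network described above, where each node carries a term, a controlled-vocabulary flag, a posting count and a level number, and labelled arcs (BT, NT, RT, UF, next) connect related concepts, one might use a structure such as the following; the Python layout, names, and the example terms are assumptions for illustration, not the authors' implementation.

# Illustrative sketch of the DSK network: nodes hold a term plus the fields
# described in the text, and labelled arcs point to other concept nodes.
from dataclasses import dataclass, field

@dataclass
class ConceptNode:
    term: str
    controlled: bool        # controlled-vocabulary term or free text
    posting_count: int      # number of data base items indexed by the term
    level: int              # specificity level in the subject classification
    arcs: dict = field(default_factory=lambda: {
        "BT": [], "NT": [], "RT": [], "UF": [], "NEXT": []})

def link(src, label, dst):
    src.arcs[label].append(dst)

def broader_terms(node, max_up=2):
    # Walk BT arcs upward, e.g. for a query-widening reasoning tactic.
    found, frontier = [], [node]
    for _ in range(max_up):
        frontier = [b for n in frontier for b in n.arcs["BT"]]
        found.extend(b.term for b in frontier)
    return found

# Hypothetical example:
# nlp = ConceptNode("natural language processing", True, 5400, 1)
# parsing = ConceptNode("parsing", True, 830, 2)
# link(parsing, "BT", nlp); link(nlp, "NT", parsing)
# broader_terms(parsing)  ->  ["natural language processing"]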
introduction:
Natural language processing has developed in the last years in several directions that often go far beyond the original goal of mapping natural laua-~uage expressions into formal internal representazions. ?roblems concerned with discourse modeling, reasoning about beliefs, knowledge and wants of speaker and hearer, expiicitely dealing with goals, plans, and speech acts are only a few of the topics ~f current interest in the field. This paper is con-:erned with one ~spect of natural language processing that we name here reasoning, it is intended as a basic • ctivity in natural language comprehension that is aimed at capturing spes/~er's goals and intentions that often lie behind the mere literal meaning of the utterance. In this work we explore the main implications of reasoning in the frame of an acttu&l application which is concerned with the natttral language acees to online information retrieval services (Politt,1981; Waterman,1978) .In particular, we shall present the detailed design of a system, named IR-NLI (Information Retrieval Natural Language Interface), that is being developed at the University of Udine and we shall discuss its main original features. The topic of natural language reasoning is first shortly i!lustrated from a conceptual point of view and compared to related proposals. The main features of the chosen application domain are then described and the specifications of ~R-NLI are stated. We later turn to the architecture of the system and we go fturther into a detailed account of the structure of its knowledge bases and of its mode of operation. Particular attention id devoted to the three fundamental modules STARTEGY GENERATOR, REASONING, ~nd UNDERSATNDING AND DIALOGUE. A sample search session with IR-NLI concludes the illustarion of the project. A critical evaluation of the work is zhen presented, and main lines of the future development ~f the project are outlined with particular ~tention to original research issues.NATURAL LANGUAGE UIi~ERST?S4DING .~ND REASONI:~G Research in natural langage precessin 6 has recome in the last years a highly multiiisciplinzry topic, in which artificial inteliigence, computational linguistic, cognitive science, psycholo~-, ~nz logic share a wide set of common intrests, in tnia frame, reasoning i~ not a new issue. The meaning that we attach to this term in the zontex~ ~f this [~ R±~o with : Milan Polytechnic Artificial Intelligence Project, Milano, Italy. ' ~ ~iso with : CISM, International Center for Mechanical Sciences, Udine, Italy. work is, nevertheless original. We distinguish in the natural language comprehension activity between a surface comprehension that only aims at representing the literal content of a natural language expression into a formal internal representation, and a leap comprehension that moves beyond surface meaning to capture the goals ~nd intentions which lie behind the utterance (Grosz,1979; Hobbs,1979; Allen, ?errault,1980) . The process that brings from surface to deep comprehension is just what we name here reasonin~ activity. Differntly from Winograd (1980) , reasoning is not , in our model, something that takes place after understanding is completed and aims at developing deductions on facts ~nd concepts %cquired. Reasoning is a basic paz~c of deep comprehension and involves not only linguistic capabilities ~understanding and dialogue) but also deduction, induction, analogy, generalization, etc., on common sense and domain specific knowledge. 
in the application of online information retrieval that we face in this work,the above concepts are :~nsi!ered in the fr%me of man-machine commLLnica -:ion, and reasoning will mostly be concerned with terrsinolo~j, as we shall further explore in the next section. 3NLIN£ INFO2MATION RETRIEVAL in :his section we present an application domain ;here the topic of natural language reasoning plays fundamental role, namely, natural language access to online information retrieval services. As it is well known, online services allow interested users zo solve information problems by selecting and re-~riev~ng reievant io~uments stored in very large bi-bilogr%phic sr f~ctual data bases. 3enerail~" end-users ire unwilling or unable] ~o serach ~ersonai!y and iirectly access these large files, but they Dften rely 3n the ~ssistanee ~f e skilful information professional, the intermediary, who h.ws how tc select e~prtpriate data bases end hr.. to design good search ~%artegies for the retrieval of the desired information, and how ~o impiement them in e suitable formal luery !an~/age. Usually, the interaction between end-~=er %n~ intermediary begins ~ith ~ presearch intervlew aimed e~ precisely clarifying the content and t .e Db.jecti'zes of nhe information need. On tha base zf the information gathered, the inzeremdiary chooses the most suitable data bases and, with the nei~ :f seraching referr~l aids such ~s thesauri, iirecncties, etc., he devises the search ~trate~# no 0e executed by the information retrieval system. The output of the search is then evaluated by the enduser, who may propose K refinemen~ ~nd an interaction of the search for better matching hi3 requests.We claim that the intermediary's task represents a good example of the issues of natur~i ian~aage reasoning, part~icularly for what concerns the ebliity ~f understanding natural language user'3 :'equest~ an! ;:' reasoning on their linguistic and aemanzlz z'..nn:e~ in order :o fully :~pture user's nears ~nd gczl~. Besides, it has to be stressed that ~he intermedizrv should also posses other important skills, nh..: i_ expertise ~nd precise knowledge ~bout ia~a ba~e cantent, organization, and indexing criteria, abcu~ availability and use of searching referral ai!s,abBut system query languages and ~coess procedures, znd last about how %o plot ~nd construct en adequate search strategy. The above illustrated :hrzcteriszlcs motivate the design of a natural Language expert ~ystem for interfacing online ~ata bases. 3n fact, the !R-NLI project has among its long term goals the i~ plementation of a system to be interposed between the end-user and the information retrieval system, capa ble of fully substituting the intermediary's role.IR-NLI is conceived as an interactive interface to online information retrieval system suppoz~ing English language interaction. It should be able to manage a dialogue with the user on his information needs and to construct an appropiate search strategy. More precisely, IR-NLI is aimed at meeting the needs of non-technical users who are not acqua/nted with on line searching. For this purpose three different capabilities are requested. First, the system has to be an expert of online searching,i.e, it must embed knowledge of the intermediary's professional skill. Second, it must be able of understanding natural language and of carrying on the dialogue with the user. 
Third, it has to be capable of reasoning on language in order to capture the information needs of the user and to formulate them with appropriate terms in a given formal query language.In the current first phase of the project we have considered a set of working hypotheses for IR-NLI :it operates on just one data base;-it utilizes only one query language; -it refers to only one subject domain; -it is conceived only for off-line use without interaction with the data base during the search session.fire suitable sequences of understanding, dialogue, and reasoning functions until the internal repre.sentation of the user's requests is completely expanded and validated.The UNDERSTANDING AND DIALOGUE module is devoted to perform activities mostly of linguistic concern. First, it has to translate the natural language user's requests into a basic formal internal representation (IR). Second, it manages the dialogue with the user by generating appropriate queries and by translating his replays,thus expanding the IR with new information. The UNDERSTANDING AND DIAILGUE ~odule utilizes for its operation a base of lin~uiszic kno.led~e (LK).The REASONING module is aimed at reasoning on IR in order to enlarge its content with all the information required to generate an appropiate search strate~[. It utilizes for this task a base of domain specific knowledge (DSK).The FORMALIZER module, after the STRATEGY GEf~-RATOR has completed its activity, constructs from the IR the output search stra~e~ to be executed for accessing the online data base. The FORMALIZER utili zes for its operation knowledge about the formal fan guage needed to interrogate the online data base and operates through a simple syntax-directed schema, it is conceived as a parametric translator capable of producing search strategies in several languages for accessing online services, such as SDC ORBIT, Euronet DIANE, Lockheed DIALOG, etc."!The general architecture designed for the IR-NLI system is shown in Figure 2 . The kernel of the system is constituted by the 3~dEI'ZGY GENERATOR , which is devoted to devise the top-level choices conzerning the overall operation of the syszem and to :cntroi their execution. It utilizes for it~ acti-"'ity a base 3f expert knowledge (E~K) which concerns the evaluation of user's requests, the managament 3f the presearch interview, the selection of a suitable approach for ~eneration of the search strategy, and ~uheduling of the activities of the lower Level modules :~ERSTAINDING AND DIALOGUE, REASONING, and FORMALIZER. The operation of the STRATEGY GENERA-TOR is organized around a basic sequence of steps, each taking into ~ccount a differnt subset of expert rules, that r%ppiy tO different situations and In this section we shall illustrate the main features of the three knowledge bases utilized by the IE-NLI system.Let us begin with DSK. The purpose of this kno K ledge base is to store information about the domain covered by the online data base to which IR-NLI refers. This information presents two ~spects : a semantic facet concerning what concepts are in the data base and how they relate to each other, and a linguistic one concerning how the concepts are currently eta'pressed through appropriate termS. The structure of DSK proposed reflects and generalizes to ~ome extent that of classical searching referral aids (in particular, thesauri and subject classifications). 
At a logical level, it is constituted by a labelled directed network in which nodes represent concepts and directed arcs represent relations between concepts. Each node contains a ~erm., a fla~ denoting whether the term is controlled or not, a field that stores the post in~ count, i.e. the number of items in the data base in which the term appear, and a level number which represents the degree of specificity of ~he term in a hirar-:hical subject classification. Arcs g'n~erai!y denote the usual cross-reference relationships utilized for struc=uring thesauri; e.g., BT (broader term), (narrower term), RT (related term), UF (used for). In addition, arcs of type ne~ are provided that al-Low, in connection with the level numbers of nodes, sequential scanning of the knowledge base accoriing to the currently utilized hierarchical subject ~iassification. This s~ructure is conceived to be ilrec~ly obtained (possibly in a partially automatic way ~hrough appropriate data conversion programs) from ~vailable searching referral aids and online thesauri.Le~ us turn now to LK. This knowledge base is ~imed at supplying all information concerning natural language ~ha~ is needed to understand user's reluests. According to the mode of operation of =he 'XDERSTANDING A~D DIALOGUE module (see section IX), it 2on%ains the lexicon of the application domain which is currently considered. Each record of the lexicon 2ontains ~ word of =he language, its sem:~n-t~2 ~2~.~e concept, ~onnec~ive, f'anction~, ~nd its ~e%nin~. The semantic type denotes ~he role ~f % ~ord in a sentence; namely:-ienoting ~ term of zhe da=a base; -iefining z parz icuiar relation between different 2.]nc~ta in user's requests; -specifying ~ particular function that the user ie-~ires ~o obtain from the information re.ri_,al~ =z ~ystem.The meaning 9f R word may be expressed zs ~ pointer to a term of the DSK in the case of a word of type concept, as a special purpose procedure in ~he case of a connective or a f~nc:ion.Let us note that,in order to avoid ~nuseful duplication of informa=ion in =he DSK and LK, a shared directory of en~r y words may be u~iiized f~r boLh bases.The purpose of EK is to contain information tha= concerns the professional expertise of the intermediary on how to manage a search session in 0rder to appropriately satisfy the information needs of the end-user. Its contort= is made up sf several classes of rules concerning the different kinds 3f activities performed during a search session. 3he general s~ructure of the rules is of the ciassical type !F-THEN.The task of the STRATEGY JENEEATCR can be considered from two differen~ poinzs of view :-an external one, tha= concerns performing in~er-media~j's activity;an in=ernal one, that rela=es to management and control of REASONING and U~DERSTANDiI;G ~ 31ALC-GUE modules.On ~he base of these specifications, it mus~ embed exper~c capabilities and behave Rs a consultation system for information retrieval [?oli:t,~9@: ..... basic mode of operation of this =odtule is ~rganized around the following four main steps tha~ reflect the usual practice of online information searching (Lancaster,~979; Meadow, Cochrane,~Od~. ~. perform presearch interview 2. select approach 3. devise search startegy h. construct search s~ar~egy.The IN adopted is unique throughou: the whole operation of the system and it is :onszitu:e/ by z frame, initialized by the UNDERSTAS~ING ~ :IALSOUE modu_le,and then further refined and expanded cy the reasoning module. 
This fr~-me i~ &ErucEured in%z ~ucframes in such % way no :ontain, :!zssi:'~ei un!er different headings, any information ~ha% is relevant for searching an online data base, and ~3 zii;w an effective pattern-matching for the seleczizn cf search approaches and tactics. More ~ecifizziiy, l? encompasses terminology about zoncepts and facetz present in user's requests, c~-ifi=~tizns about search constraints and output forma~, ~nd fi~lres about search objectives such &s recall and ~recisi~n ~Meadow, Cochr~ne,1981~.To go further in our description, let us introduce precise definitions of two technical terms above used in an informal way : search approach : the abstract way of facing a search problem, reasoning on it, analyzing its facets, and devising a general mode of opera~ion for having access to desired informazion stored in an online data base; search tactic : a move, a single step or action, in the execution of a search approach.Let us recall that a search strafe@D, is a program, written in an appropriate formal query lan~aa~e, for obtaining desired information from an online system; taking into account the two above definitions a search strate~j can be viewed as the result ~£ the execution of a search approach through application of appropriate ~earch tactics.Within IR-NLI, a search approach is represented as an al~orithm that defines which tactics to utilize, among the available ones, and how to use them in the construction of a s~rate~. An approach is not however a fixed procedure, since it does not ~pecify at each step which paz~cicular tactic to execute, but only suggests a set of candidate tactics, whose execution may or may not be fired. The operation of the STRATEGY GENERATOR is basically pattern-directed; namely, the particular activities to be performed and the way in which UN-DERSTANDING ~D DIALOGUE and REASONING modules are activated are determined by the content of the current IR *or of some par~cs of i~), which is matched with zn appropriate subset of the exper~ rules. In :his way !~3 mode of ~peration is not strictly determinate : ~ome %ctivities may or may not be fired .r may be perfDrmed in !ifferent ways according ~o the results 3f :he pattern-matching algorit~hm. As already mentioned in section IV, in the first version of IR-NLI the off-line operation :f the system lead us go consider only the buiidin~ block approach; future versions of the system viii encompass also other classical and ~ommoniy u~ilized approaches such as successive fraction, zita -:ion pearl growing, most speclfic facet flrsn, ~nc. {Meadow, Cochr~e,1981), ~hac are more ~uizab!e fJr an on-line operation of the ~ystem in which iicezt interaction with the data base luring the ae~rcn session is allowed.The .ctivity 3f the STRATEGY GENERATOR can now he repr:sented in % more ~etaiied way through ~he fsllowing high-lave! program :The REASONING module operates on the IR ani za Among the basic capabilitis of the ~ESONING module we consider generalization to broader terms, extension to related concepts, particularization to narrower terms, analysis of synonymi and homonymi, etc. its operation is based on special-purpose procedures that correspond to the reasoning actions involved in the tactics. Furthermore, when an action has to be performed on IR for extending its content, validation may be requested from the user in order to ensure a correct matzhing betvwen his wants and system proposals. 
This is done through the U~DERSTANDING AND DIALOGUE molule which has to gather user's agreement about the new terms to be introduced in the IR.U~DERSTA/~DING .~D DIALOGUEZdr.e purpose of the UNDERSTANDING AND DIALOGUE module is twofold : basic internal ~epresentation~, an~ manages a 7%ttprn-iirected invocation of heuristic rules for :-eso~uti~n 9f critical ~vent~ 'e.g., ambi~ai~ies, 9ilizpes , %naphorlc references, indirect ~9eech, :[2..~ important feature 0f the understanding fun-~%lon iz the ability to solve critical situations by engaging the user in a clarification iizlDgue ±ccivated by some of the above mentioned heuristic rules, to gather additional information which is zecessar}" to correctly ~inderstand the input natu-r%i [~nguage requests.For what concerns the dialogue function, it relies on two strictly connected activities :-generation of a lue~j, according to some requests from the STRATEGY GENERATOR ~r HEASONIIIG modules, through assembly and completing of parametric =e:~ fragments stored in the UNDERST.~ND!NG '~D DIALOGb~ module% -understanding of the user's answer and refinement, i.e. validation, updating or completing, of the current IR.Let as s~ress that, according to the basic goaloriented conception of the parsing mechanism of "~TDERSTANDING AND DIALOGUE module, the ur.ders~uding activity performed in the frame of :he diaiogue function is strongly directed by knowledge of the query tha~ the system has asked the user ~nd, therefore, of expected information to be zap~ured in the answer.In this section we present % short example of the basic mode of operation of IE-ZLI. Figure shows a sample ~ession in which, in ~ddition to the user-system dialog/e, parts of the .'R and the search strategy generated (in Euronet DI.L\'~ EL~O-LANGUAGE) ~re reported. The -~xample refers to the domain of computer science.CONCLUSIOn;In the paper the main features of =he ZR-~;LI system have been presented. The projec~ is now entering the experimental phase =hat will be carried on on a VAX 11,'7~0 syszem.The design activity so far !evelc~ed ,3uiia, Tasso,1982b Tasso, ,1982c has reached, 2n cur mind, a quite assessed point so that fuzure work :n -his norA-" will oe mainly concerned wi-h r~mo':~i -f -he restric =ions and working hypotheses -'onsiaered in the current first ~nase and with refinemenr. ::' [z~/emen=~-%ion de=aLia. ~he authors also g~±n %: [aziememt _n the next future a-'omple~.e prototype versicr. ~f =::z system to be -'onnected to a real ;nlane S~:s=e~ in "-he fro-me .~f a strictly application ~rle..ted interest .The research activity will be focused, .'n -he ocher hand, on several issues c': investigation. Among these we mention ::z= [i,~-r'--~ f--the development of more flexible and robust dialogue capabilities, including limited justification of the mode of operation of ~he system (Webbet,1982) ; -the study of advanced representations of ~actics through generalized rule structures that will allow more refined matching and firing mechanisms (Winston,198E);the design of new tactics (e.g., PATTERN, ~ECORD, BIBBLE (Bates,~979)) ~nd reasoning actions, that enable the system to keep track of previous search sessions ~nd to ~nalogize from experience in devising and executing a search approach.
Appendix:
| null | null | null | null | {
"paperhash": [
"webber|taking_the_initiative_in_natural_language_data_base_interactions:_justifying_why",
"winston|learning_by_augmenting_rules_and_accumulating_censors.",
"winograd|what_does_it_mean_to_understand_language?",
"bates|information_search_tactics",
"meadow|basics_of_online_searching"
],
"title": [
"Taking the Initiative in Natural Language Data Base Interactions: Justifying Why",
"Learning by Augmenting Rules and Accumulating Censors.",
"What Does it Mean to Understand Language?",
"Information search tactics",
"Basics of online searching"
],
"abstract": [
"In answering a factual database query, one often has the option of providing more than just the answer explicitly requested. As part of our research on Natural Language interactions with databases~ we have been looking at three ways in which the system could so \"take the initiative\" in constructing a response: (i) pointing out incorrect presuppositions reflected in the user's query [4,5]; (2) offering to \"monitor\" for the requested information or additional relevant information as the system learns of it [6,7]; and (3) providing grounds for the system's response i.e., \"justifying why\". The following responses illustrate \"presupposition correctlon\"~ \"monitor offers\" and \"justification\", respectively. This paper describes our research on producing justifications. (\"U\" refers to the user, \"S\" to the system.)",
"Abstract : This paper is a synthesis of several sets of ideas: ideas about learning from precedents and exercises, ideas about learning using near misses, ideas about generalizing if-then rules, and ideas about using censors to prevent procedure misapplication. The synthesis enables two extensions to an implemented system that solves problems involving precedents and exercises and that generates if-then rules as a byproduct. These extensions are as follows: If-then rules are augmented by if-plausible conditions, creating augmented if-then rules. An augmented if-then rule is blocked whenever facts in hand directly deny the truth of an if-plausible condition. When an augmented if-then rule is used to deny the truth of an if-plausible condition, the rule is called a censor. Like ordinary augmented if-then rules, censors can be learned. Definition rules are introduced that facilitate graceful refinement. The definition rules are also augmented if-then rules. They work by virtue of if-plausible entries that capture certain nuances of meaning different from those expressible by necessary conditions. Like ordinary augmented if-then rules, definition rules can be learned. The strength of the ideas is illustrated by way of representative experiments. All of these experiments have been performed with an implemented system.",
"In its earliest drafts, this paper was a structured argument, presenting a comprehensive view of cognitive science, criticizing prevailing approaches to the study of language and thought and advocating a new way of looking at things. Although I strongly believed in the approach it outlined, somehow it didn’t have the convincingness on paper that it had in my own reflection. After some discouraging attempts at reorganization and rewriting, I realized that there was a mismatch between the nature of what I wanted to say and the form in which I was trying to communicate. The understanding on which it was based does not have the form of a carefully structured framework into which all of cognitive science can be placed. It is more an orientation-a way of approaching the phenomena-that has grown out of many different experiences and influences and that bears the marks of its history. I found myself wanting to describe a path rather than justify its destination, finding that in the flow, the ideas came across more clearly. Since this collection was envisioned as a panorama of contrasting individual views, I have taken the liberty of making this chapter explicitly personal and describing the evolution of my own understanding. My interests have centered around natural language. I have been engaged in the design of computer programs that in some sense could be said to “understand language, ’ ’ and this has led to looking at many aspects of the problems, including theories of meaning, representation formalisms, and the design and construction of complex computer systems. There has been a continuous evolution in my understanding of just what it means to say that a person or computer “understands,” and this story’ can be read as recounting that evolution. It is",
"As part of the study of human information search strategy, the concept of the search tactic, or move made to further a search, is introduced. Twenty-nine tactics are named, defined, and discussed in four categories: monitoring, file structure, search formulation, and term. Implications of the search tactics for research in search strategy are considered. The search tactics are intended to be practically useful in information searching. This approach to searching is designed to be general, yet nontrivial; it is applicable to both bibliographic and reference searches and in both manual and on-line systems.",
"The purpose of this book is to teach the principles of interactive bibliographic searching, or information retrieval to those with little or no prior experience. The major intended audiences are students, working information specialists and librarians, and end users, the people for whom all the searching is done. Material covers such topics as the nature of online searching, the kinds of files available online, the commands used to search for them, and the strategy used to perform a search - with numerous examples from real systems presented."
],
"authors": [
{
"name": [
"B. Webber",
"A. Joshi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Winston"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Terry Winograd"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Bates"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. T. Meadow",
"P. Cochrane"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"12498617",
"117901545",
"263879847",
"10146203",
"60902589"
],
"intents": [
[],
[],
[
"methodology"
],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false
]
} | Problem: Constructing natural language interfaces to computer systems requires advanced reasoning and expert capabilities in addition to basic natural language understanding.
Solution: This paper proposes the design of a natural language interface, named IR-NLi, for accessing online information retrieval systems, focusing on natural language understanding, reasoning, and dialogue capabilities to allow non-technical users direct access to online databases. | 504 | 0.03373 | null | null | null | null | null | null | null | null |
fcbcb0011c0d58abdd36f0ab98495b43958afa47 | 2937580 | null | Application of the {L}iberman-{P}rince Stress Rules to Computer Synthesized Speech | Computer synthesized speech is and will continue to be an important feature of many artificially intelligent systems. Although current computer synthesized speech is intelligible, it cannot yet pass a Turing test. One avenue for improving the intelligibility of computer synthesized speech and for making it more human-like is to incorporate stress patterns on words. But to achieve this improvement, a set of stress prediction rules amenable to computer implementation is needed. This paper evaluates one such theory for predicting stress, that of Liberman and Prince. It first gives an overview of the theory and then discusses modifications which were necessary for computer implementation. It then describes an experiment which was performed to determine the model's strengths and shortcomings. The paper concludes with the results of that study. | {
"name": [
"McPeters, David L. and",
"Tharp, Alan L."
],
"affiliation": [
null,
null
]
} | null | null | First Conference on Applied Natural Language Processing | 1983-02-01 | 12 | 4 | null | Since speech is such an important component of human activities, it is essential that it be included in computer systems simulating human behavior or performing human tasks. Advantages of interacting with a computer system capable of speech include tha= a) special equipment (e.g. a terminal) is unnecessary for receiving output from the device.b) the output may be communicated to several people simultaneously. c) it m~y be used to gain someone's attention.d) it is useful in communicating information in an emergency. *Current address: Bell Laboratories, Indianapolis, Indiana 46219.The primary methods for generating computer synthesized speech are i) to use a lexicon of word pronunciations and then assemble a message from these stored words or 2) to use a letter-to-sound translator.A shortcoming common to both methods, and of interest to linguists and more recently computer scientists, is the inclusion of English prosody in computer synthesized speech e.g. Klatt [6] , Lehlste [8] , Wltten et al [ll] and Hill [5] . Of the three primary components of English prosody, this paper considers only stress (the other two are intonation and pause).It applies the theory for stress prediction proposed by linguists Mark Liberman and Alan Prince [9] to computer synthesized speech.Their theory was chosen primarily as a result of it having received widespread attention since its introduction (see Paradls [lO] , Yip [12] , FuJimura [3 and 4] and Basboll [2] ).In addition to the attention it received, the Liberman-Prince model [9] (hereafter referred to as rhe LP model) is attractive for computer application for two other reasons.First, the majority of its rules can be applied without knowledge of the lexical category (part-of-speech) of the word being processed since the rules are based only on the sequences and attributes of letters in a word. This feature is especially important in an unrestricted text-to-speech translation system. Secondly, since the metrical trees that define the prominence relations are a common data structure, a computer model may be designed which remains very close to the foundations and intentions of the theoretical model. This section will summarize the LP theory as presented in [9] . The LP method of predicting stress focuses on two attributes of vowels: ÷ or -!on~ and + or -low.The ~ of b~e is +lon~ while the £ of ~ is -lonE. Each of the vowels has both a + and -lon~ pronunciation.For example: state, sat, pint, pin, snow, pot, cute, and cup. The attribute + or -low is named for the height of the tongue in the mouth during articulation of the sound (see Figure i) . During production of a +low vowel, the tongue is low in the mouth while it is high for a -lo.~w vowel.Speaking aloud the words in the figure demonstrates this difference. front back As the names imply, the first and second rules deal with assignment of + or -stress, while the third predicts which vowels should belong. All three rules operate within a word from right to left.In the first stage, the shape of the penultimate (next-to-last) syllable determines the assignment of the + stress attribute using the ESR rule. "If the penultimate vowel is short and followed by (at most) one consonant, then stress falls on the preceding syllable," [9] as in [9] Each of ~he previous statements assumes the final vowel is short. The fourth case of the ESR says thac if the final vowel is long then ic must bear stress, Table l(d). 
(See [9] for exceptions Co this first stage.) ~n the second stage, the +stress attribute is assigned based on the position of the leftmost +stress vowel in the word.Since the rule retracts stress across the word It is called the Stress Retraction Rule (SRR).The ESR and SRR mark certain vowels to be stressed; this however does not imply that when the word is spoken, each of the vowels will be stressed.There are instances, depending on the characteristics of the word, where vowels will lose their stress through the application of the English Destressin8 Rule (EDR).The EDR depends on the notion of metrical crees whose purpose it is to give an alternating rhythm to the syllables of a word and define the relative prominence of each syllable within the word.Rhythm is reflected by the assignment of the actrlbuce ~, strong, to stressed syllables and w, weak, co unstressed syllables.For the words labor, ca?rlce, and Pamela the trees are simple (see Figure 2 ). The first rule in building the tree is if the vowel is -stress then its attribute is ~, if the vowel is +stress then it may be ~ or w. The root node of any independent subtree or the root node of the final tree is not labeled. The ~ E labeling defines a contrast between two adjacent components of a word; therefore, a SOfitary s or E would have no meaning. Each time a +stress is assigned by either the ESR or the SRR an attempt is made to add co the tree. As in the word labor a node is added to the tree and the vowels are marked s or w according to their stress markings, + or -. Next, any unattached vowels co the rlghc of the new node are added, as wlch Pamela.This builds a series of binary subcrees chat are necessarily left branchin~ (see Figure 3 ). There are some situations where nothing can be added to the tree after the assignment of +stress.Such words cause a rephrasing o{ the second step above to become: next attach any vowels to the right of the present vowel that have not been attached durin 8 the operation of a previous rule.These t%/o steps allow trees such as those in Figure 4 to be formed. Two questions remain. How is the tree completed? How are the ~, ~ relations defined above the vowel level?To answer the first question; after all unattached vowels to the right have been attached into a left branching subtree, this subtree is joined to the highest node of the subtree immediately to the right, if it exists (see Figure 5) . To insure that all vowels are included in the tree, one final step is necessary as illustrated by the word Monongahela.A S W S %/ W S W W %/ + + -- -- + -- --Following the rules as previously outlined will generate a stress assignment and tree such as that in Figure 6(a) .The first vowel must be included in the tree to produce Figure 6(b) , This is done as the last stage of tree building. The LCPR is used in this case to Join the vowel and the tree structure and to assign ~, w values.Io!o=LL! ++ -+-++ -+- Figure 6 . Final step in treebuilding.The English Destressin8 Rule (EDR) is used to determ/ne which vowels should be reduced.Generally t%/o things happen when a vowel is reduced. First, it will lose its +stress attribute and secondly, the vowel sound will be reduced to a schwa (an indeterminate sound in many unstressed syllables, e.g. 
the leading ~ in America).The rule is based on the tree prominance relations of the uuetrical trees, and is restricted to operating on only those vowels that have been marked +stress by either the ESR or SKE (see [9] ).Rule (see [9] ) is applied to handle apparent exceptions in the operation of the ESR, e.g. words such as alien, simultaneous, radium and labia which contain a vowel sequence preceding the vowel to be stressed.I~LE~iENTAT I ON Converting a theoretical model such as tha: proposed by LP into a computerized implementation poses problems. One concern is whether she rules and definitions of the theory are well suited to a computer implementation, or if not, must they be transformed to such an extent that they no longer resemble the originals? Fortunately the LP theory is expressed in rules and definitions that easily lend themselves to an implementation.Overcoming other problems while remaining close to the LP theory involves a careful combination of three factors. First, certain modifications must be made with the application of the rules for locating the +stress attribute and building metrical trees.Second, several assumptions must be made about the exact definitions of the terms such as VOWEL and CONSONANT.Third, some of the rules which are too general must be restricted. None of these modifications causes a drastic reshaping of the model. Three outcomes exist for a word being processed by such a system. One, the stress pattern of the word will be correctly predicted.Two, the stress pattern of the word will be incorrectly predicted.Three, the word will drop through without the system being able to predict any stress. Any modifications, assumptions or reetrictioas imposed should be done with the primary intent of reducing the number of words for which an incorrect stress pattern is predicted, even if this means increasing the number of words which drop through.One modlflcation was to use a phonetic translation of the word instead of its s~andard spelling. This ~eant working from an underlying representation rather than the surface representation.By working from the underlying representation, the attributes +-stress, and +-low could be dlfferenflared from the phonetic alphabet character directly because a +lon~ vowel and a -lon 8 vowel would be represented by two different characters in the phonetic alphabet.Four immediate results occur from maklng this modification.First, single consonant sounds such as the t_hhln thln~ are represented by a single character.However, the same is not true for dlpthongs.Both IPA symbols and VOTRAX codes (a VOTRAX ML-I speech synthesizer was used to output the results of the stress prediction) for dlpthongs are multiple character codes. Second, in a phonetic translatlon all reduced vowels are already reduced.Therefore for the most part the EDR is of llttle value.It only retains its usefulness for initial syllables that are not stressed but whose vowel is not schwa. This syllable will draw stress by the SRR creating a situation for the EDR to apply. Third, the ESR and SRR also operate less freely because they will not apply stress to a schwa. Fourth, a new rule is required to operate in conjunction with the EVL. This rule must give a final +!on~ vowel, such as the ~ in stor~, the -lon~ attribute so that the ESR can correctly assign stress.A second change was that the SRR could be applied in accordance with the principle of disjunctlve ordering.This situation results from the fact that a translator system has no lexicon. 
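The two input adjustments just described, giving a word-final +long vowel the -long attribute before the ESR applies and refusing to place stress on a schwa, can be sketched roughly as follows. The schwa symbol, function names and data layout are assumptions for the example, not the actual representation used in the translator.

```python
def adjust_final_vowel(features, phonemes):
    """Give a word-final +long vowel the -long attribute before the ESR runs.

    `features` maps phoneme symbols to feature dicts (an assumed layout).
    Returns per-position copies so the shared table is left unmodified.
    """
    feats = [dict(features.get(p, {})) for p in phonemes]
    for i in range(len(phonemes) - 1, -1, -1):
        if phonemes[i] in features:            # last vowel in the word
            if i == len(phonemes) - 1:         # ...and it is word-final
                feats[i]["long"] = False
            break
    return feats

def may_bear_stress(phoneme, features, schwa="uh"):
    """Neither the ESR nor the SRR is allowed to put stress on a schwa."""
    return phoneme in features and phoneme != schwa
```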
Although the words therefore cannot be marked for a particular type of s~rees retraction (SRR), it does not cause a major problem.One implication of these modifications is the sequential ordering of the rules which group words into classes based solely on the characteristics of their phonetic translation.Therefore any set of stress rules should be organized in terms of a 'best fi~' mode of application.Secondly, the stress rules cannot be defined in a way that can differentiate syllable boundaries, so no rule can be based on the concept of a 'light' or 'heavy' syllable. Although the stress rule input form does allow an affix option, it should be kept in mind that the e nn of enforce is considered a prefix as well as the ann of English.Finally, there can be no distinction between words based on the word stem or the word origin, except, in the case of word origin, if it can be defined in terms of a dlstinc~ affix.For example the Greek prefix hetero in: heterodox, heter0ny ~, or heterosexual is a candidate for long retraction by the SRR.Although the application model is a modified version of the LP model, it still operates in the manner of their original intent.An experiment was conducted to evaluate stress placemenc using the computerized version of the LP model. A random sample of unique English words and their correct phonetic translations used for the axperlment was selected from the American Heritage Dictionary [i] . Five hundred pairs of random numbers were generated; the first number in the pair was a random number between one and the page number of the last page in the dictionary and the second one was a random number between one and sixty.For each pair, the first number was the page on which the random word was to be found and the second number, 2, determined the word to be the ~'th on the page.If ~ was larger than the actual number of words on the page, then n modulo the number of words on the page was used.If the selected word was not polysyllabic, It was rejected.Using this technique, 357 unique random words were selected.Each word was translated into ASCII codes for the VOTRAX according to the phonetic translation in the dictionary. These translations were then given as input to the stress system.Because the words in the random sample contain combinatlons of primary, secondary, and tertiary stress, several methods arise for evaluatlng the results (listed in the order of importance):i) The number of words completely correct, the number of words incorrect, and the number of words which dropped through.2) The number of times primary, secondary, and tertiar 7 stress were each individually predicted correctly regardless of the other two.3) The number of times when secondary or tertiary stress was incorrectly predicted.4) The number of rimes secondary or tertiary stress was predicted but the word did not require it.5) The number of times secondary or tertiary stress was needed but not predicted.The figures for the first evaluation are shown in Table 2 . The totally correct words are slightly under two thirds of the entire sample.However, when the words with correct stress and the words which fell through are combined, the total is slightly over 70X. The results of the second evaluation are shown in Table 3 . 
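Hypothetically, the first of these tallies could be computed from (predicted, reference) stress patterns along the following lines; the data format, with one stress mark per syllable and None for words that dropped through, is an assumption made for the sketch rather than the format used in the experiment.

```python
from collections import Counter

def tally_words(results):
    """Count fully correct, incorrect and dropped-through words.

    `results` is a list of (predicted, reference) pairs, each a tuple of
    stress marks per syllable, e.g. (1, 0, 2), or None when the system
    left the word unchanged.
    """
    counts = Counter()
    for predicted, reference in results:
        if predicted is None:
            counts["dropped"] += 1
        elif predicted == reference:
            counts["correct"] += 1
        else:
            counts["incorrect"] += 1
    return counts
```

Run over the 357-word sample, a tally of this kind is what the totals in Table 2 summarize.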
While primary stress is predicted correctly in 75% of the cases, secondary stress is only 53Z and tertiary stress occurs too infrequently to make any observations.The number in parentheses in Table 3 indicates the total number of the particular stress level required.words of Table 2 . The importance of this fact appears when one considers that the stress pattern is partially correct, but is not distortec by incorrect stressing.Therefore even though partial, this stress pattern would be an improvement. If these words are now combined with the totally correct words and those which dropped through, they equal 291 words or 81.51%, i.e. almost 82~ of the words can be stressed totally, partially, or left unchanged. The third evaluation results are shown in Table 4 . The 19Z in which secondary stress was placed on the wrong syllable is small but still significant.Again tertiary stress occurrences were too few to make observations. With 63.3% of the sample words completely correct, 73.10% of the sample words completely or partially correct, 8.4% unmodified and 18.49% in error, this test has demonstrated that the stress model defined by the stress system and its input rules does work in a substantial percentage of cases.Of the 66 words that were incorrectly stressed, most fall into one of four categories. I) Two syllable words where the vowel pattern is -lons -lon~ or +lons +lon~ and the last syllable is stressed.In these cases the stress system incorrectly assigns stress to the first vowel: e.g., transact, mistrust.2) Words in which the ESR or SKR skips over syllables that should be stressed, e.g. isodynamic, epox-/, comprehend, remitter, inopportune.The results of the fourth test are given in Table 5 . Considering that there were 357 words in the sample, this is a relatively small number of erroneous predictions. Finally the fifth evaluation leads to Table 6 . This table shows the number of times secondary or tertiary stress was required but not predicted. An interpretation of this table suggests that for 35 words which needed both primary and secondary stress, only primary stress was predicted. These words are also included in the incorrectly stressed 3) When in a two syllable word, the word stem vowel is short and the prefix or suffix vowel is long, the long vowel is marked for stress, e.g. fancied.4) The LCPR does not correctly assign nodes ~, ~, values, e.g. contumacy, Kastight.Each of these groups is an exception to a larger group whose stress patterns fit the predicted patterns.A final question is: How well does this system predict stress in the most common English words? Of the 200 most common, 162 have a single vowel in their phonetic translation and therefore would drop through the system without being modified. Of the 38 remaining words, 33 are correctly stressed by the stress system, leaving 5 incorrectly stressed.However, since these are the most common of words of English, it would seem reasonable to include these words as special rules in the rule system of the translator and not allow the stress system to operate on them.Computer synthesized speech and linguistic theories for predicting stress can interact with one another to mutual benefit. Computer synthesized speech techniques can be used to evaluate the linguistic theory. Just as computers have been used so often to evaluate theories in other disclpllnes, so too can ~hey be used in linguistics. 
The organizationt speed, accuracy and unblasedness of the computer makes it superior to a person in many respects for Judging a hypothesis.On the other hand, the linguistic theories can provide a substantial base on which to build language components of artificially intelligent systems. The intelligibility of computer synthesized speech can be improved with the application of linguistic theories for predicting stress such as that proposed by Liberman and Prince.Evaluations such as that presented in this paper will be of value not only in comparing competing theories but will also be helpful in determ/ning whether the accuracy of a theory's predlctions is acceptable for a particular application and where improvements ,my be made to the theory. | null | null | null | null | Main paper:
introduction:
Since speech is such an important component of human activities, it is essential that it be included in computer systems simulating human behavior or performing human tasks. Advantages of interacting with a computer system capable of speech include tha= a) special equipment (e.g. a terminal) is unnecessary for receiving output from the device.b) the output may be communicated to several people simultaneously. c) it m~y be used to gain someone's attention.d) it is useful in communicating information in an emergency. *Current address: Bell Laboratories, Indianapolis, Indiana 46219.The primary methods for generating computer synthesized speech are i) to use a lexicon of word pronunciations and then assemble a message from these stored words or 2) to use a letter-to-sound translator.A shortcoming common to both methods, and of interest to linguists and more recently computer scientists, is the inclusion of English prosody in computer synthesized speech e.g. Klatt [6] , Lehlste [8] , Wltten et al [ll] and Hill [5] . Of the three primary components of English prosody, this paper considers only stress (the other two are intonation and pause).It applies the theory for stress prediction proposed by linguists Mark Liberman and Alan Prince [9] to computer synthesized speech.Their theory was chosen primarily as a result of it having received widespread attention since its introduction (see Paradls [lO] , Yip [12] , FuJimura [3 and 4] and Basboll [2] ).In addition to the attention it received, the Liberman-Prince model [9] (hereafter referred to as rhe LP model) is attractive for computer application for two other reasons.First, the majority of its rules can be applied without knowledge of the lexical category (part-of-speech) of the word being processed since the rules are based only on the sequences and attributes of letters in a word. This feature is especially important in an unrestricted text-to-speech translation system. Secondly, since the metrical trees that define the prominence relations are a common data structure, a computer model may be designed which remains very close to the foundations and intentions of the theoretical model. This section will summarize the LP theory as presented in [9] . The LP method of predicting stress focuses on two attributes of vowels: ÷ or -!on~ and + or -low.The ~ of b~e is +lon~ while the £ of ~ is -lonE. Each of the vowels has both a + and -lon~ pronunciation.For example: state, sat, pint, pin, snow, pot, cute, and cup. The attribute + or -low is named for the height of the tongue in the mouth during articulation of the sound (see Figure i) . During production of a +low vowel, the tongue is low in the mouth while it is high for a -lo.~w vowel.Speaking aloud the words in the figure demonstrates this difference. front back As the names imply, the first and second rules deal with assignment of + or -stress, while the third predicts which vowels should belong. All three rules operate within a word from right to left.In the first stage, the shape of the penultimate (next-to-last) syllable determines the assignment of the + stress attribute using the ESR rule. "If the penultimate vowel is short and followed by (at most) one consonant, then stress falls on the preceding syllable," [9] as in [9] Each of ~he previous statements assumes the final vowel is short. The fourth case of the ESR says thac if the final vowel is long then ic must bear stress, Table l(d). (See [9] for exceptions Co this first stage.) 
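A minimal sketch of this first stage is given below, before turning to the second stage. The function and parameter names are assumptions, real syllabification is not attempted, and the vowel features are passed in as a plain dictionary; it is meant only to make the rule's cases concrete, not to reproduce the implementation described in this paper.

```python
def esr_first_stage(phonemes, vowel_features):
    """Simplified sketch of first-stage (+stress) assignment.

    `phonemes` is a list of phoneme symbols; `vowel_features` maps vowel
    symbols to dicts with a "long" flag.  Returns the index of the vowel
    receiving +stress, or None if the word has fewer than two vowels.
    """
    def is_vowel(p):
        return p in vowel_features

    def is_long(p):
        return vowel_features.get(p, {}).get("long", False)

    vowel_positions = [i for i, p in enumerate(phonemes) if is_vowel(p)]
    if len(vowel_positions) < 2:
        return None                              # monosyllables fall through

    final, penult = vowel_positions[-1], vowel_positions[-2]

    # A long final vowel must itself bear stress (case (d) above).
    if is_long(phonemes[final]):
        return final

    trailing = final - penult - 1                # consonants after the penultimate vowel
    if not is_long(phonemes[penult]) and trailing <= 1:
        # Short penultimate vowel followed by at most one consonant:
        # stress falls on the preceding vowel, if the word has one.
        return vowel_positions[-3] if len(vowel_positions) >= 3 else penult
    # Otherwise the penultimate vowel is stressed.
    return penult
```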
~n the second stage, the +stress attribute is assigned based on the position of the leftmost +stress vowel in the word.Since the rule retracts stress across the word It is called the Stress Retraction Rule (SRR).The ESR and SRR mark certain vowels to be stressed; this however does not imply that when the word is spoken, each of the vowels will be stressed.There are instances, depending on the characteristics of the word, where vowels will lose their stress through the application of the English Destressin8 Rule (EDR).The EDR depends on the notion of metrical crees whose purpose it is to give an alternating rhythm to the syllables of a word and define the relative prominence of each syllable within the word.Rhythm is reflected by the assignment of the actrlbuce ~, strong, to stressed syllables and w, weak, co unstressed syllables.For the words labor, ca?rlce, and Pamela the trees are simple (see Figure 2 ). The first rule in building the tree is if the vowel is -stress then its attribute is ~, if the vowel is +stress then it may be ~ or w. The root node of any independent subtree or the root node of the final tree is not labeled. The ~ E labeling defines a contrast between two adjacent components of a word; therefore, a SOfitary s or E would have no meaning. Each time a +stress is assigned by either the ESR or the SRR an attempt is made to add co the tree. As in the word labor a node is added to the tree and the vowels are marked s or w according to their stress markings, + or -. Next, any unattached vowels co the rlghc of the new node are added, as wlch Pamela.This builds a series of binary subcrees chat are necessarily left branchin~ (see Figure 3 ). There are some situations where nothing can be added to the tree after the assignment of +stress.Such words cause a rephrasing o{ the second step above to become: next attach any vowels to the right of the present vowel that have not been attached durin 8 the operation of a previous rule.These t%/o steps allow trees such as those in Figure 4 to be formed. Two questions remain. How is the tree completed? How are the ~, ~ relations defined above the vowel level?To answer the first question; after all unattached vowels to the right have been attached into a left branching subtree, this subtree is joined to the highest node of the subtree immediately to the right, if it exists (see Figure 5) . To insure that all vowels are included in the tree, one final step is necessary as illustrated by the word Monongahela.A S W S %/ W S W W %/ + + -- -- + -- --Following the rules as previously outlined will generate a stress assignment and tree such as that in Figure 6(a) .The first vowel must be included in the tree to produce Figure 6(b) , This is done as the last stage of tree building. The LCPR is used in this case to Join the vowel and the tree structure and to assign ~, w values.Io!o=LL! ++ -+-++ -+- Figure 6 . Final step in treebuilding.The English Destressin8 Rule (EDR) is used to determ/ne which vowels should be reduced.Generally t%/o things happen when a vowel is reduced. First, it will lose its +stress attribute and secondly, the vowel sound will be reduced to a schwa (an indeterminate sound in many unstressed syllables, e.g. the leading ~ in America).The rule is based on the tree prominance relations of the uuetrical trees, and is restricted to operating on only those vowels that have been marked +stress by either the ESR or SKE (see [9] ).Rule (see [9] ) is applied to handle apparent exceptions in the operation of the ESR, e.g. 
words such as alien, simultaneous, radium and labia which contain a vowel sequence preceding the vowel to be stressed.I~LE~iENTAT I ON Converting a theoretical model such as tha: proposed by LP into a computerized implementation poses problems. One concern is whether she rules and definitions of the theory are well suited to a computer implementation, or if not, must they be transformed to such an extent that they no longer resemble the originals? Fortunately the LP theory is expressed in rules and definitions that easily lend themselves to an implementation.Overcoming other problems while remaining close to the LP theory involves a careful combination of three factors. First, certain modifications must be made with the application of the rules for locating the +stress attribute and building metrical trees.Second, several assumptions must be made about the exact definitions of the terms such as VOWEL and CONSONANT.Third, some of the rules which are too general must be restricted. None of these modifications causes a drastic reshaping of the model. Three outcomes exist for a word being processed by such a system. One, the stress pattern of the word will be correctly predicted.Two, the stress pattern of the word will be incorrectly predicted.Three, the word will drop through without the system being able to predict any stress. Any modifications, assumptions or reetrictioas imposed should be done with the primary intent of reducing the number of words for which an incorrect stress pattern is predicted, even if this means increasing the number of words which drop through.One modlflcation was to use a phonetic translation of the word instead of its s~andard spelling. This ~eant working from an underlying representation rather than the surface representation.By working from the underlying representation, the attributes +-stress, and +-low could be dlfferenflared from the phonetic alphabet character directly because a +lon~ vowel and a -lon 8 vowel would be represented by two different characters in the phonetic alphabet.Four immediate results occur from maklng this modification.First, single consonant sounds such as the t_hhln thln~ are represented by a single character.However, the same is not true for dlpthongs.Both IPA symbols and VOTRAX codes (a VOTRAX ML-I speech synthesizer was used to output the results of the stress prediction) for dlpthongs are multiple character codes. Second, in a phonetic translatlon all reduced vowels are already reduced.Therefore for the most part the EDR is of llttle value.It only retains its usefulness for initial syllables that are not stressed but whose vowel is not schwa. This syllable will draw stress by the SRR creating a situation for the EDR to apply. Third, the ESR and SRR also operate less freely because they will not apply stress to a schwa. Fourth, a new rule is required to operate in conjunction with the EVL. This rule must give a final +!on~ vowel, such as the ~ in stor~, the -lon~ attribute so that the ESR can correctly assign stress.A second change was that the SRR could be applied in accordance with the principle of disjunctlve ordering.This situation results from the fact that a translator system has no lexicon. 
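The first-match-wins discipline implied by disjunctive ordering can be sketched as a plain ordered rule list, applied in sequence as described below; the condition/action pairing shown here is an assumption about representation, not the actual rule table of the translator.

```python
def apply_first_matching(phonemes, rules):
    """Apply an ordered list of stress rules disjunctively.

    `rules` is a list of (condition, action) pairs.  Each condition takes
    the phoneme list and returns True/False; each action returns the index
    of the vowel to stress (or None).  Only the first matching rule fires;
    ordering the list from most to least specific gives a 'best fit' mode
    of application.
    """
    for condition, action in rules:
        if condition(phonemes):
            return action(phonemes)
    return None   # no rule applied: the word drops through unmodified
```

Because the first matching rule blocks all later ones, no per-word marking of retraction type is needed, which is why the absence of a lexicon does not cause a major problem here.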
Although the words therefore cannot be marked for a particular type of s~rees retraction (SRR), it does not cause a major problem.One implication of these modifications is the sequential ordering of the rules which group words into classes based solely on the characteristics of their phonetic translation.Therefore any set of stress rules should be organized in terms of a 'best fi~' mode of application.Secondly, the stress rules cannot be defined in a way that can differentiate syllable boundaries, so no rule can be based on the concept of a 'light' or 'heavy' syllable. Although the stress rule input form does allow an affix option, it should be kept in mind that the e nn of enforce is considered a prefix as well as the ann of English.Finally, there can be no distinction between words based on the word stem or the word origin, except, in the case of word origin, if it can be defined in terms of a dlstinc~ affix.For example the Greek prefix hetero in: heterodox, heter0ny ~, or heterosexual is a candidate for long retraction by the SRR.Although the application model is a modified version of the LP model, it still operates in the manner of their original intent.An experiment was conducted to evaluate stress placemenc using the computerized version of the LP model. A random sample of unique English words and their correct phonetic translations used for the axperlment was selected from the American Heritage Dictionary [i] . Five hundred pairs of random numbers were generated; the first number in the pair was a random number between one and the page number of the last page in the dictionary and the second one was a random number between one and sixty.For each pair, the first number was the page on which the random word was to be found and the second number, 2, determined the word to be the ~'th on the page.If ~ was larger than the actual number of words on the page, then n modulo the number of words on the page was used.If the selected word was not polysyllabic, It was rejected.Using this technique, 357 unique random words were selected.Each word was translated into ASCII codes for the VOTRAX according to the phonetic translation in the dictionary. These translations were then given as input to the stress system.Because the words in the random sample contain combinatlons of primary, secondary, and tertiary stress, several methods arise for evaluatlng the results (listed in the order of importance):i) The number of words completely correct, the number of words incorrect, and the number of words which dropped through.2) The number of times primary, secondary, and tertiar 7 stress were each individually predicted correctly regardless of the other two.3) The number of times when secondary or tertiary stress was incorrectly predicted.4) The number of rimes secondary or tertiary stress was predicted but the word did not require it.5) The number of times secondary or tertiary stress was needed but not predicted.The figures for the first evaluation are shown in Table 2 . The totally correct words are slightly under two thirds of the entire sample.However, when the words with correct stress and the words which fell through are combined, the total is slightly over 70X. The results of the second evaluation are shown in Table 3 . 
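The word-sampling procedure described earlier in this section can be sketched as below; the dictionary-lookup helpers passed in are assumptions standing in for the American Heritage Dictionary, and the function name is mine.

```python
import random

def sample_words(last_page, words_on_page, is_polysyllabic, n_pairs=500):
    """Draw a sample of unique polysyllabic words from a dictionary.

    `words_on_page(p)` returns the list of entries on page p and
    `is_polysyllabic(w)` tests a candidate word; both are assumed helpers.
    """
    sample = []
    for _ in range(n_pairs):
        page = random.randint(1, last_page)      # random page number
        n = random.randint(1, 60)                # random position on the page
        entries = words_on_page(page)
        if not entries:
            continue
        if n > len(entries):
            n = n % len(entries) or len(entries) # "n modulo words on the page"
        word = entries[n - 1]
        if is_polysyllabic(word) and word not in sample:
            sample.append(word)
    return sample
```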
While primary stress is predicted correctly in 75% of the cases, secondary stress is only 53Z and tertiary stress occurs too infrequently to make any observations.The number in parentheses in Table 3 indicates the total number of the particular stress level required.words of Table 2 . The importance of this fact appears when one considers that the stress pattern is partially correct, but is not distortec by incorrect stressing.Therefore even though partial, this stress pattern would be an improvement. If these words are now combined with the totally correct words and those which dropped through, they equal 291 words or 81.51%, i.e. almost 82~ of the words can be stressed totally, partially, or left unchanged. The third evaluation results are shown in Table 4 . The 19Z in which secondary stress was placed on the wrong syllable is small but still significant.Again tertiary stress occurrences were too few to make observations. With 63.3% of the sample words completely correct, 73.10% of the sample words completely or partially correct, 8.4% unmodified and 18.49% in error, this test has demonstrated that the stress model defined by the stress system and its input rules does work in a substantial percentage of cases.Of the 66 words that were incorrectly stressed, most fall into one of four categories. I) Two syllable words where the vowel pattern is -lons -lon~ or +lons +lon~ and the last syllable is stressed.In these cases the stress system incorrectly assigns stress to the first vowel: e.g., transact, mistrust.2) Words in which the ESR or SKR skips over syllables that should be stressed, e.g. isodynamic, epox-/, comprehend, remitter, inopportune.The results of the fourth test are given in Table 5 . Considering that there were 357 words in the sample, this is a relatively small number of erroneous predictions. Finally the fifth evaluation leads to Table 6 . This table shows the number of times secondary or tertiary stress was required but not predicted. An interpretation of this table suggests that for 35 words which needed both primary and secondary stress, only primary stress was predicted. These words are also included in the incorrectly stressed 3) When in a two syllable word, the word stem vowel is short and the prefix or suffix vowel is long, the long vowel is marked for stress, e.g. fancied.4) The LCPR does not correctly assign nodes ~, ~, values, e.g. contumacy, Kastight.Each of these groups is an exception to a larger group whose stress patterns fit the predicted patterns.A final question is: How well does this system predict stress in the most common English words? Of the 200 most common, 162 have a single vowel in their phonetic translation and therefore would drop through the system without being modified. Of the 38 remaining words, 33 are correctly stressed by the stress system, leaving 5 incorrectly stressed.However, since these are the most common of words of English, it would seem reasonable to include these words as special rules in the rule system of the translator and not allow the stress system to operate on them.Computer synthesized speech and linguistic theories for predicting stress can interact with one another to mutual benefit. Computer synthesized speech techniques can be used to evaluate the linguistic theory. Just as computers have been used so often to evaluate theories in other disclpllnes, so too can ~hey be used in linguistics. 
The organization, speed, accuracy and unbiasedness of the computer make it superior to a person in many respects for judging a hypothesis. On the other hand, linguistic theories can provide a substantial base on which to build language components of artificially intelligent systems. The intelligibility of computer synthesized speech can be improved with the application of linguistic theories for predicting stress such as that proposed by Liberman and Prince. Evaluations such as that presented in this paper will be of value not only in comparing competing theories but will also be helpful in determining whether the accuracy of a theory's predictions is acceptable for a particular application and where improvements may be made to the theory.
Appendix:
| null | null | null | null | {
"paperhash": [
"fujimura|modern_methods_of_investigation_in_speech_production",
"fujimura|perception_of_stop_consonants_with_conflicting_transitional_cues:_a_cross-linguistic_study",
"klatt|linguistic_uses_of_segmental_duration_in_english:_acoustic_and_perceptual_evidence.",
"yip|the_metrical_structure_of_regulated_verse",
"ladefoged|a_course_in_phonetics"
],
"title": [
"Modern Methods of Investigation in Speech Production",
"Perception of Stop Consonants with Conflicting Transitional Cues: A Cross-Linguistic Study",
"Linguistic uses of segmental duration in English: acoustic and perceptual evidence.",
"THE METRICAL STRUCTURE OF REGULATED VERSE",
"A course in phonetics"
],
"abstract": [
"Abstract Methodologies of speech research with respect to the production processes are discussed, with an emphasis on the recent development of new instrumental techniques. It is argued that systematic studies of large amounts of speech data are necessary to understand the basic characteristics of speech. The traditional notion of phoneme-size segments seems inappropriate for interpreting multidimensional articulatory movements by a concatenative model. Experimental means such as a computer-controlled X-ray microbeam technique and advanced statistical processing, in combination with a new theoretical framework of phonetic description, promise future development.",
"In vowel-consonant-vowel (VCV) utterances, the place of articulation of the consonant is cued by formant transitions both out of the first vowel and into the second vowel. When the two transitions provide conflicting place cues, which dominates perception? The present experiment compared Japanese and American English listeners' perceptions of VCV stimuli in which the consonantal transitions conflicted. Conflicting transition stimuli were created from naturally produced Japanese disyllabic forms, with two accent patterns: low-high and high-low. Results of the experiment indicated that the transitions into a vowel generally outweigh the transitions out of a vowel. Further, this effect was found to be a function not of speech production factors, but rather of perceptual factors. In addition, Japanese and American English listeners responded differentially to the accent pattern of the stimuli. American English listeners showed a greater tendency to identify the intervocalic consonant according to the out-of-vowel transitions when the accent pattern was high-low than when the accent pattern was low-high. In contrast, Japanese listeners' judgments were unaffected by the accent pattern of the stimuli. The results are discussed with reference to differences in the linguistic function of VC transitions and to differences in syllable structure between the two languages.",
"The pattern of durations of individual phonetic segments and pauses conveys information about the linguistic content of an utterance. Acoustic measures of segmental timing have been used by many investigators to determine the variables that influence the durational structure of a sentence. The literature on segmental duration is reviewed and related to perceptual data on the discrimination of duration and to psychophysical data on the ability of listeners to make linguistic decisions on the basis of durational cues alone. We conclude that, in English, duration often serves as a primary perceptual cue in the distinctions between (1) inherently long verses short vowels, (2) voiced verses voiceless fricatives, (3) phrase‐final verses non‐final syllables, (4) voiced versus voiceless postvocalic consonants, as indicated by changes to the duration of the preceding vowel in phrase‐final positions, (5) stressed verses unstressed or reduced vowels, and (6) the presence or absence of emphasis.Subject Classification...",
"这篇文章为陈渊泉(1979)有关汉语诗律的论点提供证据。並以他的理论把(一)律诗朗诵时的节奏(二)王力所谓“一三五不论 二四六分明”的原则以及(三)句法与格律的相互关系等三个问题放在一起统一处理。依此理论本文同时说明何以诗词在吟诵之时使用轻声的机会極少。这是因为音步结构树上的节有強(S)弱(W)之分而轻声不能出现在強节之下的缘故。同一理论也可以解釋诗人什么时候可以脫离正常的格律來作诗,什么时候不可以。我们常见一行诗中二、四字同调而违反正规这也可以用同样的理由來说明。最后本文以一個应用極广的原则把汉语律诗结构树上的各个节统一规定为強或弱。如此一來陈文中所提议有关调类排列的一个限制便屬不必而可以免除了。",
"Part I Introductory concepts: articulatory phonetics phonology and phonetic transcription. Part II English phonetics: the Consonants of English English vowels English words and sentences. Part III General phonetics: airstream mechanisms and phonation types place and manner of articulation acoustic phonetics vowels and vowel-like articulations syllables and suprasegmental features linguistic phonetics the international phonetic alphabet feature hierarchy performance exercises."
],
"authors": [
{
"name": [
"O. Fujimura"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Osamu Fujimura",
"M. J. Macchi",
"L. A. Streeter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Klatt"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Yip"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Ladefoged"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"46802145",
"8777222",
"27705481",
"227514129",
"62575173"
],
"intents": [
[],
[],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false
]
} | null | 504 | 0.007937 | null | null | null | null | null | null | null | null |